29. March 2011 16:49
The New York Times story about GE paying $0 in federal taxes for 2010 has been making the rounds, with reactions ranging from “eh” to apoplectic outrage.
One particularly interesting response came from Yahoo Finance blogger Henry Blodget, who wrote that we can’t blame GE. Instead, we should blame the complex tax code, which results in rich people and companies hiring high-priced accountants to find loopholes. The answer, he says, is to move to a consumption tax.
First, you should blame the lawmakers who wrote those loopholes (often knowingly) into the tax code. Second, you should want to rewrite the tax code to close the loopholes, not dump it in favor of a severely regressive tax system that would punish the poor and middle class even further.
If the goal is to get entities (people or corporations) to pay what the intent of the current tax law says they should pay, then let’s do that. Progressive tax systems are, by their nature, complex. Are flat taxes and consumption taxes easier to understand? Sure. Does that make them more likely to result in the ideal (from a progressive point of view) tax structure, one that maximizes overall GDP as well as revenue? No. Definitely not.
Of course, “easy to understand” one-liner tax policy has a good chance of winning in the long term – even if the people who vote for it are hurting themselves by doing so.
16. March 2011 13:28
There was a nasty bug in SQL Server 2008 RTM and SP1 that could result in a deadlock when you attempted to create a new schema object, such as a stored procedure, that used a user-defined data type created in the same transaction. The error doesn’t mention the data type at all. Instead, it’s the normal deadlock error, as seen below:
Transaction (Process ID 54) was deadlocked on lock resources with another process and has been chosen as the deadlock victim. Rerun the transaction.
This bug was reported on 9/3/2008 on Microsoft Connect, and they fixed the issue in SQL Server 2008 SP2, but they decided not to include the fix in SQL Server 2008 R2. So scripts that run great on 2008 SP2 will fail on R2. Wonderful.
Thankfully, the workaround is an easy one: just make sure your data type is created outside of the transaction that creates the objects that use it. This makes failed script cleanup a bit harder, but it’s not the end of the world.
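In script form, the pattern looks something like this (dbo.PhoneNumber and dbo.GetContact are made-up names, purely for illustration):

    -- Create the user-defined type in its own batch, OUTSIDE the transaction.
    CREATE TYPE dbo.PhoneNumber FROM varchar(20) NOT NULL;
    GO

    -- Objects that reference the type can now be created inside a transaction
    -- without deadlocking on the type's schema locks.
    BEGIN TRANSACTION;
    GO
    CREATE PROCEDURE dbo.GetContact
        @Phone dbo.PhoneNumber
    AS
    BEGIN
        SELECT @Phone AS Phone;
    END
    GO
    COMMIT TRANSACTION;
    GO
    -- Note: if the script fails and rolls back, the type survives (it was
    -- created outside the transaction), so clean it up by hand.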
10. March 2011 23:10
After .NET 4.0 came out I read all about the new cool stuff in it and filed as much of it into the back of my brain as I could. I saw the new dynamic keyword, and thought “oh, that’s kinda cool”, but that’s about where my thought process ended.
A few minutes ago I read about Massive, a super simple ORM (really, it’s not an ORM… more like a relational converter, since the “objects” don’t exist until the data comes back) that takes advantage of dynamic typing to produce strongly typed domain objects from almost any database. Notice I said produce rather than generate. This isn’t a code generator we are talking about here. Thanks to dynamic, it doesn’t need to be.
At its core, Massive takes the LINQ queries that we all know and love, executes the appropriate queries against the database (via the built-in ADO.NET libs from System.Data.Common), and then it does something special: it takes the resulting DbDataReader, examines the metadata associated with that reader, and creates, on the fly, a strongly typed class that represents that data via a dynamic type.
Below is the key bit from the above-referenced blog post (treat this as a sketch of Massive’s RecordToExpando extension method rather than a verbatim copy; the original may differ in detail):
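    using System.Collections.Generic;
    using System.Data;
    using System.Dynamic;

    public static class DataReaderExtensions
    {
        // Copy each column of the current record into an ExpandoObject,
        // keyed by column name. The DLR then lets callers read those
        // entries as if they were real properties: record.FirstName, etc.
        public static dynamic RecordToExpando(this IDataReader rdr)
        {
            dynamic result = new ExpandoObject();
            var dict = (IDictionary<string, object>)result;
            for (int i = 0; i < rdr.FieldCount; i++)
                dict.Add(rdr.GetName(i), rdr[i]);
            return result;
        }
    }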
That super simple chunk of code is what turns a DbDataReader’s result set into a strongly typed domain object. That, my friends, is freaking awesome.
When I realized how this guy was using dynamic types, I had a “whoa” moment. I love those moments, because unlike filing stuff into the back of my brain, a whoa moment almost always results in a change in the way I approach programming problems. This whoa is perhaps on the same level as the whoa I had when I saw my first lambda expression. This is good stuff.
10. March 2011 22:35
I recently read Jimmy Bogard’s blog post regarding versioning strategies for the sane. His suggestion, to use what amounts to a timestamp for your assembly versioning, is more or less the approach that Telerik takes.
One thing I don’t like about this strategy is that it more or less prevents you from calling your release “v2” or “v4” or whatever.
Sure, you can call your release whatever you want, but if your assembly version is 2011.1.225 and you’re calling the release “v3”, that’s bound to cause some confusion.
The approach I take is the best of both worlds. The major and minor segments are determined by me (usually based on some arbitrary marketing strategy), while the revision and build segments are the timestamp.
For instance: 2.1.2011.0691 = version 2.1, built on the 69th day of 2011 (March 10th).
The trailing 1 indicates there was a single build that day. If multiple builds took place, that trailing digit would be the final build number, so 2.1.2011.0694 would indicate the 4th build, etc.
The build number can never go above 9 since that would create ambiguity as to what was the day of year and what was the build number, but if you’re doing more than 9 builds on the day you freeze your release, you’ve got other problems.
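A tiny helper makes the scheme concrete (MakeVersion is just an illustrative name for this sketch, not my actual build script):

    using System;

    public static class VersionStamp
    {
        // Major and minor are hand-picked; the last two segments come from the
        // build date: the year, then the day-of-year padded to three digits,
        // followed by a one-digit build-of-day counter.
        public static string MakeVersion(int major, int minor, DateTime buildDate, int buildOfDay)
        {
            if (buildOfDay < 1 || buildOfDay > 9)
                throw new ArgumentOutOfRangeException("buildOfDay",
                    "Keep the counter to a single digit, or the day-of-year becomes ambiguous.");
            return string.Format("{0}.{1}.{2}.{3:D3}{4}",
                major, minor, buildDate.Year, buildDate.DayOfYear, buildOfDay);
        }
    }

    // MakeVersion(2, 1, new DateTime(2011, 3, 10), 4) returns "2.1.2011.0694".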
This leaves me with some control over my version number, whether that’s for marketing reasons, or simply because that’s the way I want it – damn it.