Posts Tagged ‘Microsoft’

Porting Microsoft SQL Server to Linux

January 15, 2012

So why didn’t Microsoft take SQL Server to *nix? On one occasion, a partner commitment that might have made it viable failed to materialize. On another occasion, I initiated the investigation on the basis of a partner request but then decided it was a bad idea. Here is why:

There are five things you have to consider when evaluating if Microsoft should take SQL Server to *nix:

  1. What exactly is the product offering you intend to bring to *nix, does it have a real market there, and can you position the offering to succeed?
  2. What is the impact of going multi-platform on the product family, engineering methodology, organization, and partner engineering organizations?
  3. What is the business model, including how do you partner, market, and (very importantly) sell into the Enterprise *nix world when you are a company that has no expertise in doing so?
  4. How do you provide Enterprise-class service for SQL Server when it is running on a platform that your services organization has no expertise with?
  5. What is the negative business impact on the entire Windows platform of making a key member of the server product family available on *nix?

via Porting Microsoft SQL Server to Linux | Hal’s (Im)Perfect Vision.

Microsoft imagines interacting with data and information in 2020

November 7, 2011

To BLOB or Not To BLOB – Microsoft Research

October 23, 2011

Application designers often face the question of whether to store large objects in a filesystem or in a database. Often this decision is made for application design simplicity. Sometimes, performance measurements are also used. This paper looks at the question of fragmentation – one of the operational issues that can affect the performance and/or manageability of the system as deployed long term. As expected from the common wisdom, objects smaller than 256K are best stored in a database while objects larger than 1M are best stored in the filesystem. Between 256K and 1M, the read:write ratio and rate of object overwrite or replacement are important factors. We used the notion of “storage age”, or number of object overwrites, as a way of normalizing wall clock time. Storage age allows our results, or similar such results, to be applied across a number of read:write ratios and object replacement rates.

via To BLOB or Not To BLOB: Large Object Storage in a Database or a Filesystem – Microsoft Research.
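The paper's rule of thumb can be sketched as a simple decision function. The 256K and 1M thresholds come straight from the abstract; the function name and the overwrite-rate heuristic for the middle band are illustrative assumptions, since the paper only says the read:write ratio and replacement rate decide that range.

```python
# Sketch of the paper's size-based placement rule. Thresholds are from the
# abstract; the middle-band heuristic is an assumption for illustration.

DB_THRESHOLD = 256 * 1024    # below this, database storage wins
FS_THRESHOLD = 1024 * 1024   # above this, filesystem storage wins

def blob_placement(size_bytes, overwrite_rate=0.0):
    """Return 'database' or 'filesystem' for an object of the given size.

    overwrite_rate stands in for the paper's "storage age" factors
    (read:write ratio, object replacement rate) in the 256K-1M band.
    """
    if size_bytes < DB_THRESHOLD:
        return "database"
    if size_bytes > FS_THRESHOLD:
        return "filesystem"
    # 256K-1M is workload-dependent: frequent overwrites fragment
    # filesystem storage, so lean toward the database (assumed heuristic).
    return "database" if overwrite_rate > 0.5 else "filesystem"
```

In practice the middle band is exactly where the paper says simple rules break down, so measurements against your own workload matter more than the heuristic above.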

SQL PASS 2011 – SQL Azure gets a facelift

October 14, 2011

The delta:

  1. The maximum database size for individual SQL Azure databases will be expanded 3x from 50 GB to 150 GB.
  2. Federation. With SQL Azure Federation, databases can be elastically scaled out using the database sharding pattern, based on database size and the application workload.  This new feature will make it dramatically easier to set up sharding, automate the process of adding new shards, and provide significant new functionality for easily managing database shards.
  3. New SQL Azure Management Portal capabilities.  The service release will include an enhanced management portal with significant new features including the ability to more easily monitor databases, drill-down into schemas, query plans, spatial data, indexes/keys, and query performance statistics.
  4. Expanded support for user-controlled collations.

Source: Just Announced at SQL PASS Summit 2011: Upcoming Increased Database Limits & SQL Azure Federation; Immediate Availability of Two New SQL Azure CTPs – Windows Azure – Site Home – MSDN Blogs.
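To make the sharding pattern concrete, here is a minimal range-based shard router of the kind Federation automates. The class, shard names, and split semantics are invented for illustration; Federation itself manages federation members and SPLIT operations through T-SQL, not application code.

```python
# Illustrative range-based shard router, sketching the pattern that
# SQL Azure Federation automates. All names here are hypothetical.

import bisect

class ShardMap:
    def __init__(self, boundaries, shards):
        # boundaries[i] is the lowest key routed to shards[i + 1];
        # keys below boundaries[0] go to shards[0].
        assert len(shards) == len(boundaries) + 1
        self.boundaries = boundaries
        self.shards = shards

    def shard_for(self, key):
        """Route a federation key to the database (shard) that owns it."""
        return self.shards[bisect.bisect_right(self.boundaries, key)]

    def split(self, at, new_shard):
        """Add a shard taking over keys from `at` up to the next boundary,
        loosely analogous to Federation's SPLIT operation."""
        i = bisect.bisect_right(self.boundaries, at)
        self.boundaries.insert(i, at)
        self.shards.insert(i + 1, new_shard)
```

The point Federation addresses is that in hand-rolled sharding this routing table, the data movement behind `split`, and connection management all live in the application; Federation moves them into the database service.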

Now Microsoft comes up with a road map for Hadoop efforts

October 14, 2011

Here is the post: Microsoft’s Big Data Roadmap & Approach – SQL Server Team Blog – Site Home – TechNet Blogs.

As we have noted in the past, in the data deluge faced by businesses, there is an increasing need to store and analyze vast amounts of unstructured data including data from sensors, devices, bots and crawlers and this volume is predicted to grow exponentially over the next decade. Our customers have been asking us to help store, manage, and analyze these new types of data – in particular, data stored in Hadoop environments.

I am not sure about IBM, but this makes two of them – Microsoft and Oracle – finally realizing the power of Hadoop and acknowledging that they have to do something about it. Pretty late in the game, I would say. Earlier, Oracle had something similar to say at OpenWorld 2011.

My Hadoop experience so far has been limited, but nevertheless amazing in every sense. I was blown away by the simplicity more than anything else. It takes minimal effort (okay, you do need the understanding) to set up a Hadoop cluster and run a task distributed across several machines. We were using 5 laptops, and although the task we wrote was a simple search, with the little effort we put in I never expected it to work. I was surprised when it sailed through.
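A simple distributed search like the one described above boils down to a grep-style MapReduce job. The sketch below simulates the map/shuffle/reduce pipeline locally; on a real cluster, Hadoop (e.g. via Hadoop Streaming) would run the mapper over input splits on each node and group the outputs before reducing. The function names and sample data are illustrative, not the actual job we wrote.

```python
# A minimal grep-style MapReduce, simulated in-process. On Hadoop, the
# mapper and reducer would run distributed; the shuffle loop below stands
# in for the framework's grouping of mapper output by key.

def mapper(line, term):
    """Emit (term, 1) for each input line containing the search term."""
    if term in line:
        yield (term, 1)

def reducer(key, values):
    """Sum the per-line hits for a key."""
    return (key, sum(values))

def run_job(lines, term):
    # Simulate the shuffle phase: group all mapper outputs by key.
    grouped = {}
    for line in lines:
        for key, value in mapper(line, term):
            grouped.setdefault(key, []).append(value)
    return dict(reducer(k, v) for k, v in grouped.items())
```

The appeal is exactly what the post describes: the application code stays this small, and the hard parts (splitting input, moving data between nodes, retrying failures) are the framework's problem.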