Archive

Posts Tagged ‘Microsoft SQL Server’

Database engine running on a GPU

January 27, 2012

Alenka is a modern analytical database engine written to take advantage of vector-based processing and the high memory bandwidth of modern GPUs.

Features include:

Vector-based processing: the CUDA programming model allows a single operation to be applied to an entire set of data at once (a minimal sketch follows this list).

Self-optimizing compression: ultra-fast compression and decompression performed directly on the GPU.

Column-based storage: disk I/O is minimized by reading only the relevant columns.

Fast database loads: data load times measured in minutes, not hours.

Open source and free
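
To make "a single operation applied to an entire set of data" concrete, here is a minimal sketch of a columnar GPU operation written against the CUDA programming model through Python's numba library. It is not Alenka's code; the column name and scale factor are made up, and it assumes a CUDA-capable GPU with numba installed.

    import numpy as np
    from numba import cuda

    @cuda.jit
    def scale_column(col, factor, out):
        # One GPU thread per row of the column; the whole column is
        # processed by a single kernel launch.
        i = cuda.grid(1)
        if i < col.size:
            out[i] = col[i] * factor

    price = np.random.rand(1000000).astype(np.float32)   # one column of a column store
    result = np.empty_like(price)

    threads_per_block = 256
    blocks = (price.size + threads_per_block - 1) // threads_per_block
    scale_column[blocks, threads_per_block](price, np.float32(1.2), result)

One launch applies the multiplication to a million values at once, which is the kind of data-parallel pattern a GPU engine builds its operators on.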

Some benchmarks:

Alenka: Pentium E5200 (2 cores), 4 GB of RAM, 1x2TB hard drive, NVIDIA GTX 260

Current Top #10 TPC-H 300GB non-clustered performance result: MS SQL Server 2005 on Hitachi BladeSymphony (8 CPU/8 cores), 128 GB of RAM, 290x36GB 15K rpm drives

Current Top #7 TPC-H 300GB non-clustered performance result: MS SQL Server 2005 on HP ProLiant DL585 G2 (4 CPU/8 cores), 128 GB of RAM, 200x36GB 15K rpm drives

via ålenkå – Browse Files at SourceForge.net.


Porting Microsoft SQL Server to Linux

January 15, 2012

So why didn’t Microsoft take SQL Server to *nix?  On one occasion a partner commitment that might have made it viable failed to materialize.  On another occasion I initiated the investigation on the basis of a partner request but then decided it was a bad idea.  Here is why:

There are five things you have to consider when evaluating if Microsoft should take SQL Server to *nix:

  1. What exactly is the product offering you intend to bring to *nix, does it have a real market there, and can you position the offering to succeed?
  2. What is the impact of going multi-platform on the product family, engineering methodology, organization, and partner engineering organizations?
  3. What is the business model, including how do you partner, market, and (very importantly) sell into the Enterprise *nix world when you are a company that has no expertise in doing so?
  4. How do you provide Enterprise-class service for SQL Server when it is running on a platform that your services organization has no expertise with?
  5. What is the negative business impact on the entire Windows platform of making a key member of the server product family available on *nix?

via Porting Microsoft SQL Server to Linux | Hal’s (Im)Perfect Vision.

SQL PASS 2011 – SQL Azure gets a facelift

October 14, 2011

The delta:

  1. The maximum database size for individual SQL Azure databases will be expanded 3x from 50 GB to 150 GB.
  2. Federation. With SQL Azure Federation, databases can be elastically scaled out using the sharding database pattern based on database size and the application workload.  This new feature will make it dramatically easier to set up sharding, automate the process of adding new shards, and provide significant new functionality for easily managing database shards (a minimal sketch of the routing idea follows this list).
  3. New SQL Azure Management Portal capabilities.  The service release will include an enhanced management portal with significant new features including the ability to more easily monitor databases, drill-down into schemas, query plans, spatial data, indexes/keys, and query performance statistics.
  4. Expanded support for user-controlled collations.
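
SQL Azure Federation drives this through T-SQL, but the underlying sharding pattern is simple: every statement is routed to the shard whose key range covers the federation key. Here is a minimal sketch of that routing idea in Python, with hypothetical shard boundaries and connection strings (this illustrates the pattern, not the actual Federation API):

    import bisect

    # Hypothetical shards: (exclusive upper bound of the key range, connection string).
    # Adding a shard ("splitting") just inserts a new boundary and connection string.
    SHARDS = [
        (100000, "Server=shard0.database.windows.net;Database=app_0"),
        (200000, "Server=shard1.database.windows.net;Database=app_1"),
        (None,   "Server=shard2.database.windows.net;Database=app_2"),  # open-ended last range
    ]

    def shard_for(customer_id):
        """Return the connection string of the shard whose range covers customer_id."""
        bounds = [b for b, _ in SHARDS if b is not None]
        return SHARDS[bisect.bisect_right(bounds, customer_id)][1]

    print(shard_for(150000))   # every statement for customer 150000 goes to shard 1

Federation adds the management pieces on top of this pattern (online shard splits and automatic routing at the connection level), so the application does not have to maintain a table like the one above by hand.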

Source: Just Announced at SQL PASS Summit 2011: Upcoming Increased Database Limits & SQL Azure Federation; Immediate Availability of Two New SQL Azure CTPs – Windows Azure – Site Home – MSDN Blogs.

Now Microsoft comes up with a road map for Hadoop efforts

October 14, 2011

Here is the post: Microsoft’s Big Data Roadmap & Approach – SQL Server Team Blog – Site Home – TechNet Blogs.

As we have noted in the past, in the data deluge faced by businesses, there is an increasing need to store and analyze vast amounts of unstructured data including data from sensors, devices, bots and crawlers and this volume is predicted to grow exponentially over the next decade. Our customers have been asking us to help store, manage, and analyze these new types of data – in particular, data stored in Hadoop environments.

I am not sure about IBM, but this makes two of them, Microsoft and Oracle, finally realizing the power of Hadoop and acknowledging that they have to do something about it. Pretty late, I would say. Earlier, Oracle had something similar to say at OpenWorld 2011.

My Hadoop experience so far has been limited, but nevertheless amazing in every sense. I was blown away by the simplicity more than anything else. It takes very little effort (okay, you do need the understanding) to set up a Hadoop cluster and run a task distributed across several machines. We used five laptops, and although the task we wrote was a simple search, given how little effort we had put in I never expected it to work. I was surprised when it sailed through.
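
For readers curious what such a job can look like, here is a minimal Hadoop Streaming sketch in Python (not necessarily what we ran; the search term, paths, and jar location are made up for illustration). The mapper emits a count for every line that contains the term, the reducer adds the counts up, and Hadoop takes care of distributing the input splits across the machines.

    #!/usr/bin/env python
    # mapper.py -- emit "<term> 1" for every input line containing the search term.
    import sys

    TERM = "error"   # made-up search term

    for line in sys.stdin:
        if TERM in line.lower():
            print("%s\t1" % TERM)

    #!/usr/bin/env python
    # reducer.py -- sum the counts emitted by all mappers, one total per key.
    import sys

    counts = {}
    for line in sys.stdin:
        key, value = line.rstrip("\n").split("\t", 1)
        counts[key] = counts.get(key, 0) + int(value)

    for key, total in counts.items():
        print("%s\t%d" % (key, total))

    # Submitted with something like (the streaming jar path depends on the install):
    #   hadoop jar $HADOOP_HOME/contrib/streaming/hadoop-streaming-*.jar \
    #       -input /logs -output /search-out \
    #       -mapper mapper.py -reducer reducer.py -file mapper.py -file reducer.py

The point is not the code itself but how little of it there is, which matches the experience described above.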