Windows 7: Optimized for Parallel Processing

October 1, 2008

The upcoming Windows 7 OS from Microsoft is slated to replace Vista. The lucky few who have seen some of the early builds have focused mainly on the UI tweaks. However, the differences are more than skin deep.

Apparently, the new OS’s core has been tweaked slightly to better support parallel processing. To preserve application and driver compatibility, however, most of these under-the-hood changes have been kept to a minimum. With the Windows core, Win32, being dismissed as unsuitable for asynchronous, concurrent computing, Microsoft faces a dilemma of sorts: go for an all-out, ground-up redesign, or take the slower, evolutionary path and gradually “phase out” the good old Win32. The former seems unlikely in the near future, thanks to the Vista debacle and fears of another backlash from the dreaded “incompatibility” ghost that has hounded Microsoft ever since that misadventure.

That said, as a long-term plan, Microsoft is inching toward dissociating Windows from Win32, albeit gradually, and replacing it with managed code that adds the much-needed, full-fledged parallel processing support. Managed code here refers to a set of programming interfaces optimized for handling parallel processing tasks spanning multiple processors. Several incubation projects, including RedHawk and Midori, are already heading in this direction.
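
To make the idea concrete, here is a minimal sketch, written in Python rather than managed .NET code, of the fan-out/fan-in pattern such parallel programming interfaces expose. The function names are purely illustrative and are not Microsoft’s actual APIs.

```python
# Illustrative sketch only: this is not Microsoft's managed parallel API,
# just the general fan-out/fan-in pattern such interfaces expose.
from concurrent.futures import ProcessPoolExecutor

def crunch(chunk):
    """Hypothetical CPU-bound work on one slice of a dataset."""
    return sum(x * x for x in chunk)

def parallel_sum_of_squares(data, workers=8):
    # Split the data into one chunk per worker and let the pool
    # schedule the chunks across however many cores are available.
    size = max(1, len(data) // workers)
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(crunch, chunks))

if __name__ == "__main__":
    print(parallel_sum_of_squares(list(range(1_000_000)), workers=8))
```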

If things work out as planned, run-of-the-mill Windows, which is bound to struggle on future computers whose processors could pack as many as 8, 16, or 32 cores, will get a new lease of life. It remains to be seen what optimization techniques Windows 7 and its sibling, Windows Server 2008 R2, incorporate ahead of the early release expected very soon!


Samsung Gets 16 GB Memory Modules Ready

October 1, 2008

What would you do with lots of RAM? If you were like us, you’d first get the RAM, and then worry about what to do with it. No such thing as too much memory, we always say. Imagine our delight, then, when we read that Samsung has announced that it’s developed a 50nm fabrication technique that will allow for up to 16GB of memory on a single memory module. And since we love dual-channel memory so much, we’d naturally like to stick two of them into our systems. Thirty-two gigs. Mmmm.

Imagine the possibilities: a Windows with no page file on your hard disk. There will be no need for nonsense like ReadyBoost, because all your frequently used files and programs will fit into your system memory, and you’ll still have RAM to spare. The icing? They’ll be DDR3 chips.

Of course, that’ll mean that you’ll need to upgrade to a 64-bit operating system to enjoy the goodness, and even though our experiences with Vista 64-bit have been positive, some niggles remain.

Then again, there’s lots of time to worry about all that. Sixteen-gig modules aren’t expected to hit till 2009, and even then, are likely to be very, very expensive. If you want to upgrade to 16GB today, you need to shell out around $3,143 (and that’s after a discount) for something like this. We’ll just stick to our dreams for now.


VMware Fusion 2 now available

September 28, 2008

Around the same time that VMware was embracing the Virtual Datacenter OS and the “internal cloud,” it was also launching the latest version of its Mac desktop virtualization product, Fusion 2.0.

“VMware Fusion 2 makes it easy and fun for every Mac user to run the Windows applications they need while enjoying the Mac experience they want,” said Pat Lee, group manager for consumer products, VMware. “Our goal is to break down the walls between Windows and the Mac by creating a user-friendly, Mac-native experience that lets our customers run any Windows application, seamlessly and safely, on the Mac. We want our customers to see that Windows really is better on the Mac.”

According to VMware, Fusion 2 adds more than 100 new features and enhancements to the product, and the company claims it delivers the most advanced Mac virtualization software available today. Today, perhaps, because virtualization vendor Parallels is planning to release its Desktop for Mac 4 product soon enough. While the heat between these two companies in the Mac space has cooled somewhat in recent months, things could be heating up again real soon.

Among the changes in Fusion 2 is a new take on protection. In addition to being able to take multiple snapshots in any number of states, VMware has added “AutoProtect,” which automatically records snapshots of running VMs at regular intervals. It has also added virus protection. OK, so it isn’t some cool new cyber-agent type thing; it’s a 12-month subscription to McAfee VirusScan Plus. Still, not bad.
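
The idea behind a feature like AutoProtect is simply a rolling schedule of snapshots. The sketch below illustrates that logic only; take_snapshot and delete_snapshot are hypothetical stand-ins, not VMware’s API, and the interval and retention defaults are assumptions.

```python
# Sketch of the rolling-snapshot idea behind a feature like AutoProtect.
# take_snapshot/delete_snapshot are hypothetical stand-ins, not VMware's API.
import time
from collections import deque

def autoprotect_loop(take_snapshot, delete_snapshot,
                     interval_s=1800, keep=10):
    """Take a snapshot every `interval_s` seconds, keeping only the
    most recent `keep` snapshots (older ones are discarded)."""
    history = deque()
    while True:
        snap_id = take_snapshot()          # e.g. returns a snapshot name/ID
        history.append(snap_id)
        if len(history) > keep:
            delete_snapshot(history.popleft())
        time.sleep(interval_s)
```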

As far as display goes, if you happen to have 10 monitors lying around, guess what? You now have the ability to run applications on all 10 displays with Fusion’s multiple monitor support. Graphics have also been enhanced. VMware has added new 3D graphics support and compatibility with DirectX 9.0c and Shader Model 2 for software and games.

VMware claims that Fusion 2 supports an impressive list of more than 90 operating systems, including Windows Vista and Windows XP. (Don’t forget, you still need to purchase separate operating system licenses.) This is one of the advantages that VMware has over competitors. It now also offers experimental support for Mac OS X Server 10.5 (keep in mind this does not cover the standard edition of the OS). In addition, users can now run virtual machines with up to four virtual CPUs (remember, the guest operating system will need to support that number of processors as well).

Feature favorite Unity is still around, breaking down the walls between Windows and Mac OS X, transforming Windows applications to work seamlessly within OS X like native applications. Users can launch any Mac file with any Windows application, seamlessly share data and folders between Windows and Mac, and even custom map the Mac keyboard to special keystrokes for Windows applications.

VMware Fusion 2 is a free, downloadable upgrade for all VMware Fusion 1.x customers. So what if you don’t have Fusion 1.x? Well, you can buy it outright at retail for $79.99. But if you own a competing product and feel the need to switch, you can grab a $30 rebate offer until the end of the year.


Microsoft amassing high-performance server software attack

September 28, 2008

Microsoft has built a strategy around the planned early-November release of its high-performance computing server that it hopes will be the catalyst to deliver massive computing power for future applications.

The strategy encompasses Microsoft applying its typical mantra of “simplifying computing” to the costly and often complex high-performance computing world in the form of its Windows HPC Server 2008 surrounded by Microsoft’s collection of applications, management wares, development tools, and independent software vendor community.

“We are not talking about a lot of unique product development here; it is mostly about packaging and coming up with appropriate licensing,” says Gordon Haff, an analyst with Illuminata. “But as HPC becomes more and more mainstream and used for all kinds of commercial roles, whether it is product design or business analytics, Windows is not such an unnatural fit as it might have been in the past.”

Microsoft said last week that it will release HPC Server 2008 on Nov. 1, the company’s most serious effort to date to offer parallel computing horsepower to corporations doing more real-time simulations, designs, and number crunching.

But the road is decidedly uphill.

Microsoft currently lays claim to less than 5 percent of HPC server market revenue, according to IDC. Those numbers compare with 74 percent for Linux and just more than 21 percent for Unix variants.

In addition, rival Red Hat has been offering its Enterprise Linux for HPC Compute Nodes since last year. And Sun reentered the HPC fray late last year with its Constellation System.

Those sorts of challenges, however, have not deterred Microsoft in the past.

The company is betting users such as engineers will combine workflows running on their Windows workstations with Windows-based back-end HPC clusters, or move those workloads off the desktop and into an HPC infrastructure.

Microsoft also envisions desktop/back-end combinations such as an Excel user performing a function call from the desktop which, in the background, executes an agent that runs computational algorithms on a networked HPC cluster and returns an answer. The user would have no notion of the back end tied to Excel, which is widely used in financial services.
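
The pattern being described is essentially a remote procedure call from the spreadsheet to a cluster-side agent. The sketch below is a hypothetical illustration of that round trip; the endpoint, job format, and function names are invented and are not the actual Excel or HPC Server 2008 interfaces.

```python
# Hypothetical sketch of the desktop-to-cluster pattern described above.
# The endpoint, job format, and function names are invented for illustration;
# they are not the actual HPC Server 2008 or Excel integration APIs.
import json
import time
import urllib.request

SCHEDULER = "http://hpc-head-node.example.com/jobs"   # hypothetical endpoint

def run_on_cluster(algorithm, payload, poll_s=2.0):
    """Submit a job to the cluster's scheduler and block until the result
    comes back -- from the spreadsheet user's point of view it is just a
    function call that happens to take a little longer."""
    req = urllib.request.Request(
        SCHEDULER,
        data=json.dumps({"algorithm": algorithm, "input": payload}).encode(),
        headers={"Content-Type": "application/json"},
    )
    job_url = json.load(urllib.request.urlopen(req))["job_url"]
    while True:
        status = json.load(urllib.request.urlopen(job_url))
        if status["state"] == "finished":
            return status["result"]
        time.sleep(poll_s)

# e.g. a pricing function exposed to the spreadsheet front end:
# price = run_on_cluster("monte_carlo_option_price", {"paths": 10_000_000})
```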

Since the 2006 release of Windows Compute Cluster Server 2003, Microsoft has been working with partners such as HP and Intel to create mass-market appeal for HPC, and the message may finally be striking a chord as prices drop and performance rises on technical computing platforms.

But Microsoft, experts say, isn’t likely to climb the ladder and replace high-end HPC environments built on Linux and Unix.

The real opportunity lies in appealing to new buyers with a Windows desktop infrastructure who are looking anew at HPC for workgroups or departments.

IDC says HPC hardware revenue generated by workgroup and departmental platforms in 2007 alone was nearly $5.5 billion, just more than half of the $10 billion total. Platforms in those segments range in price from $100,000 and below (workgroup) to between $100,000 and $250,000 (departmental).

Microsoft’s recent hardware-software partnership with Cray on the CX1 “personal” supercomputer, aimed at financial services, aerospace, automotive, academia, and life sciences and priced at $25,000, is testament to Microsoft’s plan, as is the $475-per-node price of HPC Server 2008.

That’s not to say Microsoft won’t make a run for the top. Earlier this year, a Windows Server 2008 HPC cluster built by the National Center for Supercomputing Applications garnered a No. 23 ranking on the list of the world’s top 500 largest supercomputers, achieving 68.5 teraflops and 77.7 percent efficiency on 9,472 cores.
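
For a rough sense of scale, and assuming “efficiency” here means the usual Top500 ratio of sustained to theoretical peak performance, the quoted figures work out as follows:

```python
# Back-of-the-envelope check on the NCSA numbers quoted above.
rmax_tflops = 68.5          # sustained Linpack performance
efficiency = 0.777          # assumed to mean Rmax / Rpeak
cores = 9_472

rpeak_tflops = rmax_tflops / efficiency
print(f"Theoretical peak: {rpeak_tflops:.1f} TFLOPS")                 # ~88.2
print(f"Sustained per core: {rmax_tflops * 1e3 / cores:.1f} GFLOPS")  # ~7.2
```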

But experts say Microsoft’s sweet spot will be much lower down the list.

“The Microsoft strategy is aiming hardest at verticals where Windows is strong on the desktop and then extending that Windows environment upward,” says Steve Conway, research vice president for technical computing at IDC. “It includes applications such as Excel and tools like Visual Studio so people can unify their desktop and server workflow.”

Microsoft also plans to integrate HPC Server with its System Center tools for application-level monitoring and rapid provisioning by releasing an HPC Management Pack for System Center Operations Manager by year-end, according to Ryan Waite, product unit manager for HPC Server 2008.

The company is aligning HPC Server 2008 with Visual Studio Team System and F#, a development language designed to help write new applications and rewrite old ones for parallel computing environments.

“We are looking at the holistic system,” says Vince Mendillo, director of HPC in the server and tools division at Microsoft.

Familiarity is the big theme. Windows HPC Server 2008 is built on the 64-bit edition of Windows Server 2008.

The platform combines the operating system, a message passing interface, and a Microsoft-built job scheduler into a single package.

The server software, built to scale to thousands of cores, also includes NetworkDirect, Microsoft’s new high-speed remote direct memory access (RDMA) interface, as well as cluster interoperability through standards such as the High Performance Computing Basic Profile specification produced by the Open Grid Forum. The server features high-speed networking, cluster management tools, advanced failover capabilities, and support for third-party clustered file systems.
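
The message passing interface bundled with the product follows the standard MPI programming model. As a flavor of what cluster code written against that model looks like, here is a minimal example using the mpi4py Python bindings as a stand-in for whatever binding an application would actually use; submission through the job scheduler is omitted.

```python
# Minimal MPI-style compute-and-reduce, using the mpi4py bindings as a
# stand-in for whatever language binding a cluster application would use.
# Run with e.g.:  mpiexec -n 4 python pi_mpi.py
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()      # this process's ID within the job
size = comm.Get_size()      # total number of processes in the job

# Each rank integrates its own slice of 4/(1+x^2) over [0, 1] ...
n = 10_000_000
local = sum(4.0 / (1.0 + ((i + 0.5) / n) ** 2)
            for i in range(rank, n, size)) / n

# ... and the partial sums are combined on rank 0 via message passing.
pi = comm.reduce(local, op=MPI.SUM, root=0)
if rank == 0:
    print(f"pi ~= {pi:.10f}")
```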

“HPC is no longer a niche either in terms of hardware platform or in terms of pervasiveness,” Illuminata’s Haff says. “For the most part, it is using volume hardware and is being applied to all kinds of problems in all kinds of companies and organizations.”

It is that trend that Microsoft is betting on.

“We can take people’s apps on Windows workstations and automatically scale those apps with supercomputer capabilities on the back end,” Microsoft’s Waite says. “When you pull all those pieces together in an integrated fashion, HPC becomes easier to use.”


The technology behind an F1 race

September 27, 2008

When the streets of Singapore come alive with F1 action this weekend, it may be easy to forget how much technology is involved in enabling the cars to whiz around the track at breakneck speeds.

Perhaps the most noticeable equipment will be the lights lining the track. Designed by Italian lighting contractor Valerio Maioli, the Philips-made system will involve some 1,500 lighting projectors around the track, lighting it to 3,000 lux, nearly four times brighter than a typical sports stadium.
Provision has been made for wet weather in the tropical city: the projectors will beam light on the track at different angles, rather than vertically, to minimize glare off the road surface should it rain.

The power requirements of these lights are correspondingly demanding. While many of the teams will plug their back-end IT systems into the country’s power grid, Valerio Maioli has fitted 12 twin-power generator sets to power the lights. The 24 generators are also fail-resistant: in each pair, the second generator will pick up the load should the first one fail, keeping light levels consistent.

But green supporters should rest easy, a Philips spokesperson told ZDNet Asia. The lighting system is 16 percent more energy efficient compared to competitors’ products, said the spokesperson.

Another noticeable addition to the track from Valerio Maioli will be digital flags–electronic light displays which will replace the traditional colored flags used in day races, for better visibility at night. These 35 panels will communicate vital information to drivers.

Supercomputing in Formula One
Behind the scenes is where you will find the heavy-duty computing power. Alex Burns, chief operating officer of the Williams F1 team, described to ZDNet Asia in an interview the magnitude of the systems involved, both in the lead-up to the event and during the race itself.

Burns said the team takes 35 Lenovo Thinkpad laptops to the circuit, to be used by race engineers. In the garage by the pit stop, there are another eight racks of servers: two for the data coming off each of the two cars, and another two for each car’s engines, he said.

All of this is necessary to store and process the terabytes of telemetry data that is taken from over 100 sensors on each car, so that engineers can fine-tune performance.
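
As a purely hypothetical illustration of what working with that telemetry involves at the data level, the sketch below aggregates per-sensor, per-lap statistics from a stream of samples; the sensor names and record layout are invented, since real F1 telemetry formats are proprietary.

```python
# Hypothetical sketch of per-sensor aggregation over car telemetry.
# Sensor names and record layout are invented for illustration.
from collections import defaultdict
from statistics import mean

def summarize_laps(samples):
    """samples: iterable of (sensor_name, lap_number, value) tuples."""
    by_sensor_lap = defaultdict(list)
    for sensor, lap, value in samples:
        by_sensor_lap[(sensor, lap)].append(value)
    return {key: {"min": min(vals), "max": max(vals), "mean": mean(vals)}
            for key, vals in by_sensor_lap.items()}

# Example: compare a tyre-temperature channel across two laps
demo = [("tyre_temp_FL", 1, 92.1), ("tyre_temp_FL", 1, 95.4),
        ("tyre_temp_FL", 2, 97.0), ("tyre_temp_FL", 2, 98.3)]
print(summarize_laps(demo))
```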

Drivers use this data to compare and learn from each other while on the track, too, according to Williams team driver, Nico Rosberg.

Speaking at a media briefing, Rosberg said: “I can compare [driving] data with teammates, and immediately adapt my driving style to the portions of the track where they were faster around the corners.”

The Williams team also relies on a custom-built supercomputer, which crunches out simulation data for drivers. Rosberg has tried out Singapore’s track on a simulator already, according to Burns.

The computing power required is directly related to the quality of the simulation, because tracks are reproduced to within 2 millimeters of their actual surfaces, said Burns. “We laser scan every bump, and simulate light conditions too.”

Speedy connectivity wins the race

All of this data is monitored at the track, and also by engineers located at the U.K.-based Williams headquarters at the same time.

The Williams team uses two AT&T network nodes in each race to relay information back to headquarters.

The time taken to move the gigabytes of telemetry data has in recent years been cut down by advances in communications technology, Burns said. What took an hour to transmit two years ago took 20 minutes last year, and is now down to seven to eight minutes.
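
Taking Burns’s figures at face value (and the midpoint of “seven to eight minutes”), the relative speedups are easy to work out, even though the absolute data volume isn’t given:

```python
# Relative speedup implied by the transfer times Burns quotes
# (the absolute data volume isn't given, so only ratios are meaningful).
minutes = {"two years ago": 60, "last year": 20, "this year": 7.5}
baseline = minutes["two years ago"]
for season, m in minutes.items():
    print(f"{season}: {m:g} min, {baseline / m:.1f}x faster than the baseline")
```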

Martin Silman, executive director of AT&T’s global concept marketing arm, told ZDNet Asia that speedy connections make crucial differences to the team’s preparation time.

“Once testing finishes on Saturday evening in Singapore, the teams have exactly two hours to get data back to their factories in the United Kingdom,” he said. The data will be used to update race strategies and churn out new settings for the cars before the final race on Sunday.

Another team, BMW Sauber, relies on an IP-based connection provided by T-Systems. Both parties told ZDNet Asia in a joint e-mail interview that high-speed data transmissions keep the team lean, so no additional personnel are required onsite for the Singapore race. “During the race weekend the team is focused on the race challenge, and our R&D projects are temporarily paused,” they said. Data is exchanged between the race location and BMW Sauber’s home bases in Munich and Hinwil.

Sorting through the terabytes
Besides impressive hardware, F1 teams rely on software products to organize the data collected and to automate heavy processes such as analysis.

Perhaps surprisingly, the Vodafone McLaren Mercedes team uses two off-the-shelf products from SAP and Microsoft, rather than customized software. This has saved the team money on customization costs, according to SAP.

The team uses SAP’s software to coordinate engine design and development work among 400 personnel. The software also tracks the life cycle of 3,000 engine components.

According to Microsoft, SQL Server 2008 is used to manage data generated by the engine control unit (ECU). The data is then pushed out to Microsoft Excel for analysis and visualization.
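
How that hand-off might look in practice is sketched below, purely as an illustration: the table, columns, and connection details are invented, and pyodbc and pandas stand in for whatever tooling the team actually uses.

```python
# Hypothetical sketch of the SQL Server -> Excel hand-off described above.
# The table, columns, and server name are invented; pyodbc/pandas are
# stand-ins for the team's actual tooling.
import pandas as pd
import pyodbc

conn = pyodbc.connect(
    "DRIVER={SQL Server};SERVER=telemetry-db;DATABASE=ecu;Trusted_Connection=yes"
)

# Pull one session's ECU channels and push them to a spreadsheet for analysis.
query = """
    SELECT lap, channel, sample_time, value
    FROM ecu_samples
    WHERE session_id = ?
"""
df = pd.read_sql(query, conn, params=["SGP-2008-quali"])
pivot = df.pivot_table(index=["lap", "sample_time"],
                       columns="channel", values="value")
pivot.to_excel("ecu_session_SGP-2008-quali.xlsx")
```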


Oracle Hangs Shingle on Hardware Store

September 26, 2008

Oracle (Nasdaq: ORCL) CEO Larry Ellison startled attendees at the company’s Openworld conference — as well as the rest of the industry — with his announcement that Oracle and HP (NYSE: HPQ) are joining forces to build computer hardware.

It is Oracle’s first direct foray into hardware manufacturing. HP will actually make the line of data warehouse application computers; Oracle will market them under its own brand.

Specifically, the duo will develop the Exadata Storage Server and the Database Machine — a programmable storage server and an advanced database server. This is an endeavor that has been in the making for the last three years, according to Ellison.

New Territory
The purported goal of the servers is to speed the performance of Oracle software; the company claims that the storage server was designed to push data more quickly to the databases by pairing Intel (Nasdaq: INTC) multicore processors with memory.

According to tests that Oracle has run on its prototypes, the machines process information 10 times faster than systems currently in use.

The reaction to Oracle’s surprise announcement has generally been favorable. The database industry is very mature, and Ellison has stretched Oracle’s software capabilities about as far as they can go through many major and minor acquisitions over the past few years.

IBM’s Shadow
Oracle was likely searching for a way to ratchet up the competition with IBM (NYSE: IBM), Charles King, principal of Pund-IT, told TechNewsWorld.

“It is something of a disadvantage to go up against IBM, which has both the software and the hardware to play with to develop highly customized offerings for its own DB2 solutions,” he said. “IBM is a partner with Oracle, but I think Oracle feels that there are market opportunities which it can leverage through other partnerships.”

Ultimately, the new servers will appeal most to Oracle and HP’s own installed base, King speculated. “I don’t see them breaking a lot of new greenfield ground with this. It is too narrowly focused a product to attract anyone outside of the dedicated base.”