The real reason Microsoft open sourced .NET

DevOps, microservices, and the shift to containers and lightweight computing environments explain a lot about Microsoft’s position on .NET, open source and Nano Server.

With its engineers involved in more than 2,000 open source projects, you’d have to agree that open source has more than a foothold at Microsoft these days. Most recently, for example, the browser team made the Chakra JavaScript engine that powers both Edge and Internet Explorer open source, for a very practical reason.

Node, the popular JavaScript runtime, currently works only with Google’s V8 JavaScript engine. With Chakra now open source, Microsoft can take the fork of Node that it created to run on Chakra and contribute it back to the project – which means developers who use Node will have the choice of running it on Chakra as well as on V8, opening up a much bigger market for Microsoft’s browser technology.

The shift in how enterprises want to do development explains a lot about the open sourcing of .NET and ASP.NET as well. Partly, it’s to get the community involved – taking advantage of the ideas and expertise of developers who embrace open source projects. Software companies like Fog Creek and Xamarin that have written their own .NET compilers have already replaced those with Microsoft’s open source Roslyn .NET compiler.

Microsoft also wants to bring these technologies to Linux, in large part because of Azure. Running a cloud platform gives Microsoft an interest in Linux that goes far beyond the open source contributions the Windows Server team has been making to the Linux kernel so that distributions run well on its Hyper-V hypervisor. As of September 2015, more than 20 percent of the virtual machines running on Azure IaaS were Linux, and Microsoft has even persuaded Red Hat to support Azure – in addition to AWS – with its CloudForms cloud management platform.

“As we pursue our vision of the fabric and the cloud anywhere, that is as much a story about supporting Linux workloads as it is Windows workloads,” says Jeffrey Snover, lead architect for Windows Server.

“Throughout our organization, each one of the teams now have Linux teams within them,” says Snover. “We have historically had the group in Windows Server doing Linux support for Hyper-V and they have made fantastic strides there; we have fantastic network support in Technical Preview 4.” There’s already a Linux version of the PowerShell Desired State Configuration tool, to make it easier to manage Windows Server and Linux with the same tools.


“And so too,” says Snover, “the .NET team is taking .NET and making it available on Linux.”

That suits customers like the FiOS team at Verizon, which is using Linux clusters for Docker containers deployed with Mesos to run .NET and ASP.NET 5. It makes sense for Microsoft to keep Verizon as a customer of its development platform, and not just so it can sell tools like Visual Studio. In future, when Windows Server 2016 brings support for Docker, containers and the lighter-weight Nano Server option, Microsoft hopes to win Verizon back; that’s far more likely if it has stayed with .NET, even on Linux.
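For a sense of what that looks like in practice, here is a minimal sketch of the kind of ASP.NET Core 1.0 service that can be packaged into a Linux container. The names and the single hard-coded response are purely illustrative, and it assumes the Microsoft.AspNetCore hosting and Kestrel packages:

```csharp
using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.Hosting;
using Microsoft.AspNetCore.Http;

// A self-hosted ASP.NET Core service: no IIS dependency, so the same
// code can run in a Windows or Linux container.
public class Startup
{
    public void Configure(IApplicationBuilder app)
    {
        // Respond to every request with a plain-text message.
        app.Run(context => context.Response.WriteAsync("Hello from ASP.NET Core"));
    }
}

public class Program
{
    public static void Main(string[] args)
    {
        var host = new WebHostBuilder()
            .UseKestrel()               // cross-platform web server
            .UseStartup<Startup>()
            .Build();

        host.Run();                     // blocks until the host shuts down
    }
}
```

Packaging a service like this for Linux then amounts to copying the published output into an image that carries the .NET Core runtime.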

The reason customers like Verizon want .NET running in containers isn’t a desire to move to Linux for its own sake, according to Snover, and that leaves a definite opportunity for Windows Server.

“When you pull on that thread, what really motivated them is the desire to have a really lightweight compute environment, and the ability to stand up and restart and scale things very, very agilely,” says Snover. “This was something they were not able to achieve with a full Windows Server stack and the full .NET stack. They will be able to do that now, with Windows Server, thanks to Nano Server and our container work.”

Moving to microservices

.NET itself is changing, as the recent renaming of the open source version (from .NET Core 5 and ASP.NET 5 to .NET Core 1.0 and ASP.NET Core 1.0) underlines. .NET Core doesn’t cover as much as the full .NET Framework 4.6 (it doesn’t have the server-side graphics libraries, for instance), and the same goes for ASP.NET Core 1.0 compared with ASP.NET 4.6 (it has the Web API but not SignalR, VB or F# support yet). The newer versions don’t completely replace the current ones, although they’ll get the missing pieces in the future. They’re also built in a new way, with faster releases and more emphasis on moving forward than on avoiding breaking changes.

That’s the same shift you’re seeing across Microsoft. Over the last decade, building Azure has taught the company a lot about the advantages of microservices for what would otherwise be large, monolithic applications. The original service front end managed resources like compute, storage, networking and the core infrastructure components – for the whole worldwide service – in a single app. It was a large and complicated codebase, running in a single data center, and it took up to a month to release an update – after it was finished and tested – which meant it was only updated once a quarter. Plus, the management tools for all the different components were secured by a single certificate.


Rewriting that as around 25 different microservices makes it easier to develop, test and release new features. New features can be “flighted” to a test system to see how they perform, and releasing updates takes no more than three days – even though the resource providers that manage compute, storage and networking now run in the individual data centers. That improves performance because there’s far less latency when, for instance, the compute used in the Azure data center in Australia is managed by a resource provider running in that same data center rather than in Texas. Putting compute and data together isn’t just faster and easier to scale; it makes things more reliable, because you’re not relying on the network between data centers for management. Limiting each microservice to operating in its own area improves security too.

These are the usual advantages of well-designed microservices architectures, and Microsoft is trying to give businesses an easy way to use them with Azure Service Fabric. This is a .NET-based microservices platform (running across a cluster of physical or virtual machines) that it started building as Windows Fabric back in 2003. Azure SQL Database was the first service built on it; now Azure DocumentDB, Event Hubs, Cortana, Intune, Power BI, Skype for Business, the Azure IoT Suite and all the virtual machines in the Azure core infrastructure are built with Service Fabric.
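To give a sense of the programming model, here is a rough sketch of a stateless Service Fabric service in C#. The type and service names are illustrative, and it assumes the Microsoft.ServiceFabric.Services SDK package:

```csharp
using System;
using System.Fabric;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.ServiceFabric.Services.Runtime;

// A stateless microservice: Service Fabric places instances of it across
// the cluster and restarts them on another node if hardware fails.
internal sealed class CounterService : StatelessService
{
    public CounterService(StatelessServiceContext context)
        : base(context) { }

    // RunAsync is the service's main loop; the runtime cancels the token
    // when it needs to move or shut down this instance.
    protected override async Task RunAsync(CancellationToken cancellationToken)
    {
        long iterations = 0;
        while (!cancellationToken.IsCancellationRequested)
        {
            iterations++;   // a real service would do useful work here
            await Task.Delay(TimeSpan.FromSeconds(1), cancellationToken);
        }
    }
}

internal static class Program
{
    private static void Main()
    {
        // Register the service type with the Service Fabric runtime on this
        // node, then keep the host process alive.
        ServiceRuntime.RegisterServiceAsync("CounterServiceType",
            context => new CounterService(context)).GetAwaiter().GetResult();

        Thread.Sleep(Timeout.Infinite);
    }
}
```

The platform, rather than the application, decides where instances run and how many there are, which is what makes the scaling and failover described above largely automatic.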

In the future, Service Fabric will also support Linux, Docker and Java. Service Fabric is available on Azure today, and you’ll be able to run it on your own servers (or hosted with other cloud providers) as part of the Azure Stack technical preview (which should be a finished product by the end of 2016).

Companies like Verizon might be ahead of the curve, but for new applications designed to take advantage of cloud technologies, containers, microservices and faster, more nimble development are going to be key. “Everybody is after the same thing,” Microsoft’s Snover says. “They want to be able to develop their apps to be as small and as resource efficient as possible. And associated with small comes agile, secure and fast.”
