What is the biggest problem you see in Microsoft's approach to Services and how does ServiceStack try to tackle it?
Some of Microsoft's problems are a result of their general attitude to what they believe constitutes good framework design; others are just side-effects of trying to satisfy designer-first tooling and provide a familiar API to designer-led developers, i.e.:
- Promotion of C# RPC method calls for service API designs leading to the design of chatty, client-specific APIs
- Choosing to standardize around brittle, bloated, slow and over-complicated SOAP and WS-* serialization formats
- Trying to achieve end-user simplicity with heavy artificial abstractions, UI designers and big tooling
- Creating heavily abstracted APIs in an attempt to present developers an artificial server-side object model
- Trying to unify all network endpoints under a shared abstracted object model
- Overuse of XML Configuration and code-gen
- Testability and Performance treated as afterthoughts
RPC method signatures
The pursuit of providing developers a familiar RPC API with rich UI tooling in VS.NET (e.g. the "Add Service Reference" dialog) trades initial simplicity for the promotion of remote service anti-patterns, leading down a path of unnecessary friction and brittleness in the long run. Unfortunately this same design is perpetuated in every web service framework Microsoft has produced, all of which have promoted RPC Service API designs that encourage developers to treat remote services like local method calls. This is harmful in many ways: remote services are millions of times slower than local method calls and by definition encapsulate an external dependency that is more susceptible to faults. Tying your implicit service contract to server RPC method signatures also couples and blurs the implicit Service Contract of your API with its server implementation. Not enforcing a well-defined boundary encourages poor separation of concerns and the bad practice of returning heavy ORM data models over the wire, which because of their relational structure and cyclical relations are by design poor substitutes for DTOs and will sporadically fail. They also couple your implicit service contract to your underlying RDBMS data models, imposing friction on change. Remote APIs are also disconnected from their server call-sites, which continue to evolve after client proxies are deployed; ideally frameworks should be promoting evolvable, resilient API designs to avoid runtime failures when services change.
Here's an illustration of the differences between the chatty RPC API that WCF promotes vs the message-based API ServiceStack encourages, whilst this example of the Web API tutorial re-written in ServiceStack shows how a message-based API encourages fewer, less chatty, more re-usable services.
By embracing Martin Fowler's remote service best practices, ServiceStack has been able to avoid many of the pitfalls that have historically plagued .NET web service developers:
- Encourages message-based, coarse-grained/batch-full interfaces, minimizing round-trips and promoting the creation of fewer, but more tolerant and version-able, service interfaces. ServiceStack's message-based design ensures developers always create coarse-grained API designs.
- Dictates the use of special-purpose POCOs (Plain Old C# Objects) to generate the wire format of your web service responses. ServiceStack has always encouraged the use of DTOs that are decoupled from their implementation and kept in their own dependency and implementation-free assembly. This strategy is what allows the same Service Types used to define the server to be shared with all .NET clients, providing an end-to-end typed API without the use of code-gen.
- Encapsulates all server communications behind an explicit and re-usable client Gateway. This best-practice is violated by code-gen proxies (like WCF's), which merge code-gen types with the underlying service clients, making them difficult to mock, test and substitute with different implementations. It's also common for changes to existing services to cause compilation errors in client code. This is rarely an issue with ServiceStack, which is able to reuse generic service clients so the surface-area of change is limited to types, and because of our message-based design any functionality added or removed that isn't used won't affect existing clients.
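To make the contrast concrete, here's a small sketch of a chatty RPC-style contract vs a message-based equivalent. All type and member names below are hypothetical, invented purely for illustration:

```csharp
using System.Collections.Generic;

// Chatty RPC-style contract: one method per client requirement,
// each call a separate network round-trip (hypothetical interface)
public interface ICustomerRpc
{
    Customer GetCustomerById(int id);
    List<Order> GetCustomerOrders(int id);
    void UpdateCustomerEmail(int id, string email);
}

// Message-based equivalent: the Request DTO is the explicit contract.
// One coarse-grained, version-able message that can grow new optional
// fields over time without breaking existing clients
public class GetCustomerDetails
{
    public int Id { get; set; }
    public bool IncludeOrders { get; set; } // added later, no new endpoint
}

public class GetCustomerDetailsResponse
{
    public Customer Customer { get; set; }
    public List<Order> Orders { get; set; }
}
```

A single GetCustomerDetails call replaces two round-trips, and because unknown fields are simply ignored during deserialization, older clients and servers keep working as the message evolves.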
SOAP Web Services
SOAP is one of those technologies that should never have existed in its current form, or at least should have been marginalized to only the areas where it provides a benefit. It's built on the false premise that in order to enable data interchange you need to opt into complexity and be as strict and explicit as possible, forcing all implementations to implement a complex schema in order to communicate. In fact the opposite has proven true: there is much less burden and complexity, and the resulting output is faster, more interoperable and more versionable, if a minimal and flexible format like JSON, CSV, Protocol Buffers, MessagePack or BSON is used instead.
Despite WSDL being built on top of HTTP, I've always found it hard to imagine that its designers made use of the benefit of hindsight when they developed it. I expect it would've been hard to justify how WSDL's introduction of Types, Messages, Operations, Ports, Bindings and Services was somehow a superior replacement for HTTP's simple URL identifier and Accept/Content-Type headers.
Despite its name, SOAP is neither Simple nor an Object Access Protocol, and it has many properties that make it a poor choice for developing services, i.e.:
- It's bloated and slow - Routinely ending up on the wrong end of size and performance benchmarks
- It's brittle - Simple namespace, field, enum or type changes can cause runtime exceptions
- It's a poor programmatic fit - It doesn't map naturally to any object type system causing friction whenever projecting in and out of application code
- It violates HTTP - HTTP includes simple Verbs for distributed access, e.g. GETs should be side-effect free, allowing responses to be cached by clients and middleware. Since every SOAP request is an HTTP POST, it is unable to take advantage of one of the primary benefits of HTTP
- It has poor accessibility - A SOAP message is essentially an abstract container designed to transport custom XML payload bodies and SOAP headers. This is a poor approach, as abstract message formats require more effort to access and serialize into whilst providing less typed access to your data. It also limits access to your services to SOAP-aware clients. As it provides no real-world value, many Web APIs no longer wrap responses in message containers and simply serialize raw JSON and XML outputs as-is - a practice known as POX/J (Plain Old XML / JSON)
- It's complex - SOAP, WSDLs, UDDI and WS-* introduce unnecessary complexity where it's not needed. Eventually only contributors to the specs (or SOAP framework developers) have a good understanding of the complete WS-* stack and how best to implement it with the few big-iron frameworks that have attempted to adopt it.
- It encourages code-gen - Given its complexity, SOAP is generally infeasible to implement without a SOAP framework and code-gen proxies
Given it goes against many of the core tenets of a service, it's surprising SOAP became as popular as it did. Having spent years developing services for Governments and Large Enterprises, the unfortunate reality is that for many of them exposing SOAP endpoints for their services is a mandatory requirement. For Internet companies and value-focused Start-Ups, however, it's a relatively non-existent technology; most have instead opted to create simple HTTP APIs returning Plain Old XML or JSON responses.
Despite it being a poor option in most cases, ServiceStack continues to enable and support SOAP endpoints for your services, as there still exist many enterprise systems that only allow connecting to SOAP endpoints. This is in line with ServiceStack's core objectives of providing maximum utility, accessibility and reach for your services.
SOAP is by design an extremely brittle and fragile format, where you'll typically see run-time errors occurring if a service deviates slightly from the WSDL from which the client proxy was generated. Although failing fast is seen as an advantage in a static type system, it is an undesirable attribute for remote services, where you want to achieve backwards and forwards compatibility so server API changes don't break existing clients. This goal is ignored by WCF, which promotes RPC method signatures, the SOAP format and code-gen to provide what is likely the most fragile combination of technologies used in web service implementations today.
SOAP vs Protocol Buffers
It's interesting to see how SOAP compares with Google's solution, the simple IDL Protocol Buffers - which it uses for almost all of its internal RPC protocols and file formats:
- It's multiple times smaller and an order of magnitude faster than SOAP
- Uses a simple DSL to define message-based APIs
- .proto files define Types that are a good programmatic fit for programming languages
- Includes support for versioning
- Provides native client and server bindings for Python, Java and C++
Despite being developed by different companies, IDLs like the above have ended up being much simpler, faster and easier to use than SOAP which, based on the sheer size of its specification, appears to have been developed by committee without any regard for performance, size or ease of implementation.
Because of JSON's ubiquity, versatility and ease of implementation, it's natively supported on most platforms and is quickly becoming the preferred data format for developing Web APIs - especially ones servicing Ajax clients.
In our own experience we've found JSON to be a superior format for enabling services. It's a more compact, flexible, tolerant and resilient format that's easier to create, consume and debug, offering less friction and superior productivity over SOAP. The only problem with JSON at the time we started ServiceStack was that the JSON serializers in the .NET Framework were actually slower than its XML Serializers. Being performance conscious, we weren't happy accepting this trade-off, so we released our own JSON serializer that ended up being more than 3x faster than any other .NET JSON serializer and more than 2.5x faster than any serializer (inc. binary) in the .NET Framework.
Following the same minimal spirit as JSON, we've also developed the JSV Format, which is like JSON but uses CSV-style escaping, making it slightly faster, more compact and more human-friendly than JSON. It's useful for .NET to .NET services, and is also used for parsing Query Strings, where it allows passing complex object graphs in GET requests in ServiceStack.
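As a rough illustration of the difference, here's a sketch using ServiceStack.Text's serializers (the exact output shown in the comments is indicative and may vary between versions):

```csharp
using ServiceStack.Text;

var order = new { Id = 1, Tags = new[] { "new", "priority" } };

string json = JsonSerializer.SerializeToString(order);
// JSON quotes every string and key, e.g. {"Id":1,"Tags":["new","priority"]}

string jsv = TypeSerializer.SerializeToString(order);
// JSV uses CSV-style escaping, only quoting values when it has to,
// e.g. {Id:1,Tags:[new,priority]}
```

The absence of pervasive quoting is what makes JSV slightly smaller and easier to read, at the cost of only being practical between .NET endpoints that both understand the format.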
ServiceStack vs WCF approach
Despite being very different service frameworks in implementation and approach, WCF and ServiceStack have fairly similar goals. ServiceStack takes a very service-oriented approach to building services, where it's optimally designed to capture your service implementation in its most re-usable form. From this code-first, typed design we're able to infer greater intelligence about your services, allowing us to generate XSDs, WSDLs and auto-generated metadata pages and to expose pre-defined routes, all automatically. All functionality, features, endpoints and Content-Types added are built around your existing models and provide instant utility without additional effort or changes to application code. We refer to this approach as starting from code-first models as the master authority and projecting out.
WCF's goal is also to provide a services framework that enables services on multiple endpoints, but it takes the reverse angle and provides a unified abstraction over all network endpoints which you have to configure your services to bind to. As a primary goal it's also one of the few implementations to have deep support for the WS-* stack.
Ultimately we both aim to provide an easy-to-use services framework; we just have very different ideas on how to achieve this simplicity.
Simplicity and Tackling Complexity
WCF seems to favour heavy abstraction, complex runtime XML configuration to manage it, and big tooling to provide end-to-end connectivity for developers. This abstraction covers a lot of complex technology: its abstracted artificial server-side object model is complex, its configuration is complex, WSDLs are complex, and SOAP / WS-* are complex. The result is that gaining a good understanding of it requires numerous text books on each of these subjects.
There is a school of thought that believes the way to tackle complexity is to add higher-level abstractions. The problem with this approach is that it only works when the abstraction is perfect (e.g. a programming language over machine code). If it's not, the moment you run into unexpected behavior or configuration, integration and interoperability issues, you need to understand what's happening under all those layers. Adding more abstractions in these cases actually increases the conceptual space developers have to be familiar with, and the cognitive overhead of developing against the higher-level abstraction. This makes it harder to reason about your code-base, as you need to understand all the layers beneath it. To further compound the problem, WCF's server-side object model is new and artificial: new developers aren't familiar with it as there's nothing else like it, and it doesn't teach you anything about the services or HTTP domain. So developers gain no transfer of knowledge when they move onto the next services framework, i.e. the investment in text books required to learn how to use WCF would be better spent learning HTTP + TCP/IP, as any knowledge gained remains useful when moving onto other web service frameworks, independent of language or platform.
At the level of the technical implementation, heavy abstractions incur unnecessary performance overheads that are harder to optimize, since they obstruct and impose friction when trying to interact with low-level APIs. Our approach is to only add abstractions when they're absolutely necessary, as seen with our thin IHttpRequest and IHttpResponse wrappers, needed to provide a common API for ASP.NET and self-hosted HttpListener hosts. Rather than adding layers of abstraction, we prefer adding features orthogonally by wrapping DRY functionality around the low-level interfaces. This has the added benefit of allowing end-users the freedom to do the same and add their own enhancements in user-space libraries, promoting a lean, DRY and readable code-base. Exposing low-level APIs ensures we maintain a flexible framework where users have complete control over system outputs, with access to multiple ways to easily customize the returned response, so they're never limited and can control every byte that's returned.
WCF adopts a unique approach to services in that it tries to force all supported endpoints to standardize on a single artificial abstraction model. This suffers from the side-effects of the general principles of abstraction: a unified abstraction model must take into account the complexities of everything it's trying to abstract, whilst only being able to support the intersection of features and lowest-common-denominator functionality available in each endpoint. This results in an incomplete surface API, with the abstraction itself obstructing access to the underlying endpoints, making each less accessible and configurable.
Being able to pull off what WCF has accomplished is a huge technical feat in itself, requiring a massive investment of technical resources. But it's essentially wasted effort, since it results in a less productive and usable framework compared with the standard approaches taken on alternative platforms. WCF also requires a lot more investment of effort from end-users in order to gain even a cursory level of experience. For these reasons I don't expect we'll ever see a WCF-like framework or its unified endpoint abstraction undertaken by anyone else. Due to its complexity and the sheer size of its technical implementation, I doubt it will evolve and adapt to support new developer paradigms or service patterns either. I suspect WCF will suffer the same fate as WebForms: relegated to a deprecated technology, losing favor to Microsoft's next replacement framework.
We still plan on adding more endpoints to ServiceStack in addition to the REST, SOAP, MQ and RCON endpoints already available. But by taking a convention-based approach we avoid requiring heavy configuration, our message-based design avoids big tooling and simplifies the surface-area that needs to be implemented, and starting from ideal C# and projecting out allows new functionality to be automatically available to end-users without additional effort. The absence of a unified abstraction model and our de-coupled architecture allow us to add new endpoints and features orthogonally, without imposing undue complexity on existing components.
All in all, we're able to deliver a full-featured, better-performing, easier-to-understand services framework in a much leaner and nimbler code-base that requires drastically less technical effort to implement and maintain.
XML Configuration is another idea embraced in WCF that we don't agree with: it inhibits testing and requires more effort to maintain. The only things that should be in your application's config are the parts of your application that are actually configurable. Code remains the superior way to define and compose your services' dependencies, and has the advantage of being debuggable and statically verifiable at compile-time. WCF requires a lot of configuration - it's easy to create complete applications in ServiceStack that are smaller than the XML configuration required by a typical WCF Service. It ends up being simpler and cleaner if configuration happens at the IOC level - i.e. directly against the .NET API of the features you want to enable.
The problem with big tooling is that it breaks down whenever you try to do something new with it, or something it wasn't intended for. In these cases you become a slave to the tooling, limited to the functionality it supports. Big tooling also doesn't survive framework rewrites, as it encourages heavy and complex code-bases that are hard to evolve and re-factor. I anticipate this is the reason why Microsoft is forced into continually re-writing service frameworks instead of being able to evolve their existing ones, and why WebApi isn't able to re-use MVC's existing abstractions or enable WCF's SOAP support, despite adopting the same RPC API design and its self-hosting option already being built on top of WCF.
Building on the lowest-level ASP.NET abstractions allows ServiceStack to provide great integration with ASP.NET MVC, where it's able to easily re-use the existing implementations of many components in MVC, like Authentication filter attributes, Caching and Session providers. You can also call ServiceStack services from inside MVC at little more than the cost of a C# method call.
Throughout the years ServiceStack has managed to continually evolve and has outlasted many generations of service frameworks - services built 4 years ago implementing ServiceStack's IService still run today.
ServiceStack's take on Complexity
ServiceStack by contrast is a much simpler picture, e.g. ServiceStack's Architecture fits on 1 page. It only requires 1 line of Web.config, just to tell ASP.NET to forward all requests to ServiceStack.
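For reference, that single registration looked roughly like this at the time (a config fragment; the exact handler type name varies between ServiceStack versions):

```xml
<system.web>
  <httpHandlers>
    <!-- Forward all requests to ServiceStack -->
    <add path="*" verb="*"
         type="ServiceStack.WebHost.Endpoints.ServiceStackHttpHandlerFactory, ServiceStack" />
  </httpHandlers>
</system.web>
```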
Conventions and Artificial Complexity
How best to tackle complexity is probably the biggest philosophical difference between ServiceStack's and Microsoft's approaches to developing libraries and frameworks. Microsoft prefers to be explicit and configurable via XML, introduces heavy new artificial abstractions (forward-looking to support all potential use-cases), relies on abstractions to expose cosmetically simpler user-facing facades, and optimizes for designer-friendly tooling and novice developers. Their primary motivation seems to be to present developers with a familiar programming model, leveraging big tooling to facilitate it. We see this with WebForms, which provided WinForms developers an event-based programming model to develop websites, and with WCF, which lets developers create remote services using normal C# methods and interfaces. Their approaches generally require significant technical effort to achieve, resulting in large code-bases.
By contrast, we view code-base size as code's worst enemy, and avoiding the introduction of artificial complexity is a primary objective, i.e. we're vehemently opposed to introducing new concepts, abstractions or programming models. We instead prefer thin low-level interfaces mapping 1:1 with the underlying domain to maximize flexibility, reduce friction and minimize projection complexity when further customization is needed. DRY, high-level functionality is obtained with re-usable utils and extension methods. Adopting a code-first development model captures the essence of the user's intent in code, enabling a more elegant, uncompromised design and avoiding the projection complexity introduced when interfacing with designer tooling. All our libraries work with POCOs for maximum reusability, and our built-in translators simplify the effort of translating between domain-specific models.
We present users a best-practices message-based design, promoting the optimal approach for developing services from the start. Services are just normal C# classes, free of endpoint concerns and their dependencies auto-wired to drive users towards adopting good code-practices.
Conventions and reducing the required industrial knowledge are our best weapons for tackling complexity. Rather than being explicit, it's better to provide conventional default behavior that works as expected. This saves users from having to learn your framework's APIs, since expected standard behavior can be assumed, and allows us to provide more functionality for free, like having all our built-in endpoints and formats automatically enabled out-of-the-box. You're also free to return any response from your services, which gets serialized into the requested Content-Type.
Ultimately we believe simplicity is best achieved by avoiding complexity in the first place, by:
- Removing all moving parts and accumulated abstractions - Built on top of ASP.NET's raw IHttpHandlers
- Use convention over configuration - All Services implementing IService are automatically wired-up and made available
- Provide sensible defaults - ServiceStack just works out-of-the-box, no further configuration is needed
- Enable all features - All in-built formats and end-points enabled by default, pre-defined conventional routes automatically available. Use config to opt-out
- Automatic discovery - ServiceStack includes a /metadata page listing all services, the routes they're available on, XSDs, WSDLs, etc
- Introduce no new artificial constructs - ASP.NET developers will be instantly familiar with how to customize the HTTP Request and Response
- Start with C# and Project out - Start with the ideal C# code and add orthogonal features so they're instantly available during normal development
- Opinionated towards productivity - Every Request DTO in ServiceStack must be unique, this lets you call any service with just the Request DTO Name and body
- Building functionality around POCOs - You can use the same POCO as a DTO in ServiceStack, OrmLite Data Model, Cache, Session, Config, etc
- Using a Message-based design - Binding to a single model is much easier to implement and reason about than an RPC method signature
- Flexible - Provide a number of Custom Filters and Event Hooks to plug into and customize any stage of the Request and Response pipeline
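Many of these points fall out of a minimal service definition. The sketch below uses hypothetical Hello types to show the whole surface-area: the Request DTO is the contract, and once defined, the service is automatically available on all enabled endpoints, formats and pre-defined routes:

```csharp
using ServiceStack; // namespaces vary between ServiceStack versions

// The Request DTO is the Service Contract; IReturn<T> marks the Response type
[Route("/hello/{Name}")] // optional user-defined route
public class Hello : IReturn<HelloResponse>
{
    public string Name { get; set; }
}

public class HelloResponse
{
    public string Result { get; set; }
}

// Services are just normal C# classes with auto-wired dependencies
public class HelloService : Service
{
    public object Any(Hello request)
    {
        return new HelloResponse { Result = "Hello, " + request.Name };
    }
}
```

Because Request DTO names are unique, the same service is also callable by name alone, e.g. via a pre-defined route of the form /json/reply/Hello, without any extra configuration.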
Avoid big tooling and code-gen
We dislike code-gen as we believe it adds unnecessary friction to a project; we're also against relying on any big tooling in the core part of the development workflow.
Since the Service Contract for your ServiceStack services is by design maintained in your Request and Response DTOs, we're able to provide a typed end-to-end API using nothing other than the types you built your services with and any of the generic, re-usable .NET clients. This lets us provide the most succinct, typed, end-to-end API of any services framework in .NET.
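A sketch of what this looks like in practice, assuming hypothetical Hello/HelloResponse DTOs kept in a shared, dependency-free assembly referenced by both server and client:

```csharp
using ServiceStack; // client namespaces vary between ServiceStack versions

// Shared DTOs (hypothetical), visible to both server and client
public class Hello : IReturn<HelloResponse> { public string Name { get; set; } }
public class HelloResponse { public string Result { get; set; } }

public class Example
{
    public static void Main()
    {
        // One generic, re-usable service client works for every service -
        // no code-gen, no proxies, just the shared DTO types
        var client = new JsonServiceClient("http://localhost:1337/");
        HelloResponse response = client.Get(new Hello { Name = "World" });

        // Swapping wire formats is just swapping the client; call-sites
        // are unchanged: new XmlServiceClient(...), new JsvServiceClient(...)
    }
}
```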
Based on WCF's design and heavy reliance on config, it doesn't appear that WCF was built with testability in mind. It is, however, a core objective in ServiceStack: we provide an embedded (and overridable) IOC by default, encouraging good development practices from the start. Services need only implement the implementation-free IService marker interface or inherit from any of the convenient Service or ServiceBase classes - each of which is testable in isolation and completely mockable. The result of our efforts allows the same Unit Test to also serve as an XML, JSON, JSV and SOAP Integration Test, and our self-hosting HttpListener hosts make it easy to perform in-memory integration tests.
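A sketch of what testing a service in isolation can look like (hypothetical Hello types and NUnit-style assertions, assumed for illustration):

```csharp
using NUnit.Framework;
using ServiceStack; // namespaces vary between ServiceStack versions

public class Hello { public string Name { get; set; } }
public class HelloResponse { public string Result { get; set; } }

public class HelloService : Service
{
    public object Any(Hello request)
    {
        return new HelloResponse { Result = "Hello, " + request.Name };
    }
}

[TestFixture]
public class HelloServiceTests
{
    [Test]
    public void Can_invoke_the_service_without_a_host()
    {
        // No HTTP host, XML config or client proxies needed - it's just a class
        var service = new HelloService();

        var response = (HelloResponse)service.Any(new Hello { Name = "unit test" });

        Assert.That(response.Result, Is.EqualTo("Hello, unit test"));
    }
}
```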
Performance is a primary objective in ServiceStack, as we consider performance to be the most important feature. Our message-based design promotes fewer network calls, and we take care to only expose user-facing APIs that are fast. Contributions are often rejected because they contain slow code. We're diligent in our implementations: we don't use any runtime reflection or regular expressions, preferring instead to use faster alternative solutions.
Fastest Serialization Formats
Rich Caching Providers
Since caching is necessary to create high-performance services, we include a rich Caching Provider model with implementations for In-Memory, Redis, Memcached, Azure and Amazon back-ends. The Caching APIs persist the most optimal format, e.g. if the client supports it, we'll store the compressed JSON output in the cache and write it directly to the Response stream on subsequent requests, enabling the fastest response times possible in managed code.
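A sketch of the Caching APIs in use. The Orders DTOs and LoadOrdersFromDb helper are hypothetical, and ToOptimizedResultUsingCache is the extension method handling the optimal-format caching described above, though its exact signature varies between ServiceStack versions:

```csharp
public class OrdersService : Service
{
    public object Get(CachedOrders request)
    {
        var cacheKey = "urn:orders:" + request.CustomerId;

        // On a cache-miss the factory runs and the result is stored in the
        // registered ICacheClient (In-Memory, Redis, Memcached, etc).
        // On a cache-hit the serialized (and, when the client supports it,
        // compressed) output is written straight to the Response stream.
        return RequestContext.ToOptimizedResultUsingCache(Cache, cacheKey,
            () => new OrdersResponse
            {
                Results = LoadOrdersFromDb(request.CustomerId) // hypothetical
            });
    }
}
```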
Fixing .NET's performance problems
Whenever we identify bottlenecks in Microsoft's libraries, we'll routinely replace them with faster-performing alternatives. We've done this with our JSON Serializer and pluggable GZip and Deflate compression libraries, as well as avoiding the degrading performance issues plaguing ASP.NET developers by providing our own clean Session implementation that works with any of the above Caching providers.
Leading client for the fastest Distributed NoSQL DataStore
In our pursuit of technologies for developing high-performance services, we ended up developing and maintaining .NET's leading C# client for the fastest distributed NoSQL data store - Redis.
What exactly is a message-based Web service?
In essence, a message-based service is one that passes messages to facilitate its communication. A good metaphor illustrating the difference between an RPC method and a message-based API is Smalltalk's or Objective-C's message dispatch mechanism vs a normal static C method call. Method invocations are coupled to the instance they are invoked on, whereas in a message-based system the request is captured in a message and sent to a receiver. The receiver doesn't have to handle the message itself, as it can optionally delegate the request to an alternative receiver to process instead.
Message-based design is enabled in ServiceStack by capturing the Service's Request into a Request DTO that's completely de-coupled from any one implementation. You can think of making a ServiceStack request as a Smalltalk runtime method dispatch at a macro scale, where the ServiceStack host is the receiver, the HTTP Verb is the selector and the Request DTO is the message.
It doesn't matter which of the endpoints the Request is sent to, as the request can be populated with any combination of PathInfo, QueryString and Request Body. After Request binding, the request travels through all user-defined filters and pre-processors for inspection, where it can optionally be handled before it reaches the service implementation. Once the request reaches the service, it invokes the best-matching selector: by default it will look for a method with the same name as the HTTP Verb, and if one doesn't exist it falls back to a catch-all 'Any' method that can be used to handle the request on any endpoint or route, in any format. Even inside the Service implementation, the request can be further delegated to an alternative service, or easily proxied to a remote sharded instance if needed.
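The selector fallback described above looks like this in a service class (a sketch with hypothetical GetOrders DTO names):

```csharp
public class OrdersService : Service
{
    // Best-matching selector: invoked for HTTP GET requests
    public object Get(GetOrders request)
    {
        return new GetOrdersResponse { /* read-optimized projection */ };
    }

    // Catch-all fallback: handles any other Verb, endpoint (MQ, SOAP, ...)
    // or format when no Verb-named method matches
    public object Any(GetOrders request)
    {
        return new GetOrdersResponse();
    }
}
```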
Conceptually, in ServiceStack you're just sending a message to a ServiceStack instance; the client is not concerned with what ultimately handles it, only that a response is returned for reply requests, or that the request is successfully accepted for oneway messages. In an RPC API you're conceptually invoking a remote method, which tightly couples the request to its remote implementation's method signature.
There are many natural benefits gained when adopting a message-based design: they offer better resilience, flexibility and versionability than their RPC cousins. One example of the benefits possible: when you send a request to its one-way endpoint and the ServiceStack instance has an MQ Host configured, the request is automatically deferred to the configured MQ Broker and processed in the background. So even if the ServiceStack Host goes down, none of the pending messages are lost, and they are automatically processed the next time the Host starts up. This is the type of behavior that is enabled for free in ServiceStack. When no MQ Host is enabled, the request is just processed normally, i.e. synchronously by an HTTP web worker.
Most of the benefits of message-based designs are gained over time, as you develop and evolve your existing services and add support for more clients. One immediate benefit is being able to provide an end-to-end typed API without the use of code-gen. This is impossible to achieve without a message-based design, which ensures the essence of your Service Contract is captured in re-usable DTOs. Being able to share on the client the same server DTOs you defined your web services with completely by-passes the normal development workflow of re-generating client proxies from your services' interim WSDL/XSD schemas.
Typed clients are the underpinnings of Native SDKs, which provide the most value to end-users of your service as they remove most of the burden required to consume your API. This approach is popular with companies that really, really want you to use their APIs, i.e. where their business's success depends on their popular use. It's the preferred approach taken by Amazon EC2, Google App Engine, Azure, Facebook, eBay, Stripe, Braintree, etc.
More importantly, message-based designs encourage the design of coarse-grained and more re-usable services. By contrast, RPC method signatures are generally designed to serve a single purpose. Rather than adding more RPC methods for every client requirement (which introduces a new external endpoint each time), message-based designs instead encourage enhancing existing services with extra functionality, since it can be added without friction. This has the additional benefit of providing instant utility to clients already consuming existing services, since they can easily access the extra features without introducing a new code-path to call a new external endpoint.
This is an especially important approach to take when implementing service-intensive systems like SOA platforms, as services routinely end up out-living and serving more clients than the original clients that consumed them, so it's important not to have your service APIs driven by ad hoc client-specific requirements. It's more useful to design APIs from the system's perspective, with the goal of exposing the underlying system's capabilities in a generically re-usable API. This is the main reason why I now only ever adopt message-based designs for all my service endpoints, as coarse-grained APIs naturally encourage the design of more re-usable and feature-rich APIs.
The benefits of message-based designs are already well-known to the developers of leading distributed platforms that have adopted them, e.g. Google's Protocol Buffers, Amazon's Web Services platform, Erlang processes, F# mailboxes, Scala's Actors, Go's Channels, Dart's Isolates, Clojure's agents, etc.
You recently introduced a Razor engine making ServiceStack a more complete web framework rather than just a web services framework - what was the motivation behind that?
We've always wanted to have a good HTML story with ServiceStack. From the point of view of a Services Framework, HTML is just another Content-Type, although it does have the special property of being supported in all browsers, making it the only format capable of rendering a universal UI supported on most computing devices. Since ServiceStack could easily be hosted alongside any ASP.NET or MVC Web Framework there wasn't an immediate need to support HTML: we were able to satisfy Single Page Apps by serving static assets, and whenever we needed to dynamically generate content we could just leverage the hosting Web Framework. Although this worked well enough, we weren't completely satisfied. Our choices were to use WebForms, which like WCF we considered to be a leaky, over-architected server-side abstraction, or MVC, which remains a comprehensive framework but, due to the increasing complexity and additional moving parts added with each release, was frustrating to get working on Mono.
The eventual motivation for providing our own HTML story was largely due to our ambitions of providing full support for Mono, so we could run on each of the exciting platforms Mono supports. That, and our self-hosted HttpListener host was becoming more popular but being held back because it wasn't able to generate dynamic HTML views, as both WebForms and MVC required an ASP.NET host. So we decided we needed to provide support for an integrated HTML View Engine. Unfortunately the only View Engines available at the time were WebForms, which we didn't like very much, or Razor, which was beautiful but closed-source at the time and poorly documented. So we decided instead to create our own View Engine based on our 2 favourite markups: Markdown and Razor. ServiceStack is nicely architected so we're able to easily plug in new Content-Types without disrupting the other formats and endpoints. Two weeks of development later, Markdown Razor was born - a nice blend of Markdown (the ideal markup for content) and Razor, providing its dynamic functionality.
We were happy with the result as we were now able to build rich ajax-powered, documentation-heavy websites like ServiceStack Docs using just ServiceStack and Markdown Razor. Markdown has the distinct advantage of being natively supported on GitHub, which let us import our existing GitHub pages as-is as well as live-edit and preview content directly from our public GitHub repository. This still wasn't a complete solution: although Markdown is perfect for content, it's not ideal for precise HTML layout. The ideal View Engine for this task that's familiar to most .NET developers was Razor, so once it became Open Source we jumped at the chance to integrate it. With the help of our good friends from NancyFx (another good alternative Web Framework), who told us how to get Razor support working with VS.NET IntelliSense, we were able to add Razor View Engine support to ServiceStack that now works equally well on all supported hosts and platforms.
But we didn't stop there; now that we had a complete stack and total control over the HTML rendering, we were able to introduce some of our own unique functionality that can be seen on our showcase site: razor.servicestack.net. Thanks to our virtual file system we're able to serve Razor views from sources other than the file-system, like embedded inside a .NET dll. Using the self-host, this lets you package your website, complete with Razor views, inside a managed .NET .exe. Another unique feature we've introduced is being able to embed Partial views of other View Engines. This enables the ideal scenario of creating the structure of your page with Razor and HTML while maintaining the content of your page in Markdown, which can easily be embedded as a Partial view.
We also think we've provided more appealing alternatives to some of MVC's features, like our Cascading Layout templates as a simpler, more intuitive way of maintaining multiple website layouts than MVC Areas, and ServiceStack Bundler, a faster, simpler and more flexible cross-platform node.js-based alternative to MVC Web Optimization.
Are there any scenarios where you think WCF/Web API/MVC might be better suited than ServiceStack?
If you believe in the value of following REST and HATEOAS constraints in developing server-driven systems, you're going to want to use Web API and join the developing culture in that community. If you would like maximum utility for your Services and to be hosted in alternative endpoints like SOAP and MQs (and upcoming TCP support), ServiceStack is the better choice.
If you're an MVP or Microsoft Gold Partner you're naturally going to want to stick with the MVC and Web API technology stacks, as Microsoft will have you covered from SQL Server and AppFabric all the way to Azure. We see more value and opportunity in supporting the better-scaling and better-performing alternative platforms, which is primarily where we'll be focusing our efforts: providing a great story for Amazon's EC2 and Linux-only clouds like Google Compute Engine, as well as the alternative RDBMS solutions OrmLite supports and high-performance NoSQL solutions, with a continued heavy investment in Redis and integrated adapters for Cloud-hosted data stores.
Microsoft has collaborated with open source projects in the past (e.g. jQuery and NuGet) and MS folks like Scott Hanselman seem quite open about adopting good open source solutions into the MS default stack wherever possible - do you foresee any such collaborations that could benefit the .NET community in general?
When it was launched, the NuGet project received criticism for its lack of collaboration with other pre-existing Open Source solutions. But overall I think NuGet has turned out to be Microsoft's most helpful contribution to date in reducing the effort to use Open Source libraries, as it provides a window in VS.NET where developers can easily reference external packages. We've definitely seen a greater adoption of ServiceStack's libraries since publishing to NuGet, with more than 200k downloads in the last 18 months.
As for Open Source .NET libraries, Microsoft has only just adopted its first Open Source .NET library, JSON.NET, earlier this year when it was bundled with Web API. Like any company it makes sense for them to adopt superior Open Source alternatives when they exist, as all their previous JSON Serializer attempts to date haven't matched the features of JSON.NET or the performance of ServiceStack's serializers. Although this provides a great boost for a single library like JSON.NET, which has seen an order of magnitude more downloads than all other JSON Serializers combined, it doesn't have a halo-effect helping the adoption of any other library - it actually has a negative impact. The power of defaults means .NET developers need a good reason to deviate and adopt an alternative. We're lucky to have a reason, as .NET's fastest JSON Serializer, making it popular with performance-conscious companies like StackOverflow who've adopted it for their JSON needs. But we're a distant 2nd place with only 1/14 of JSON.NET's market share, and the next Open Source JSON Serializer in line has only 1/110 of its market share. As serialization performance is critical in developing high-performance services, we view our serializers as a core component and we're committed to maintaining the best and fastest serializers we can.
Other than JSON.NET, I believe DotNetOpenAuth is the only other Open Source .NET library they've adopted since, which again makes sense for any company wanting to avoid unnecessarily re-inventing the wheel. So whilst they now seem to be open to adopting superior external Open Source libraries when it's in their interest, this change in behavior hasn't itself provided any noticeable benefits to the overall community.
But thanks to their change in business models, Microsoft has begun to open up a lot more, with most of their libraries and frameworks around Azure being Open Sourced. This is great as it lowers the barrier for everyone adopting new software, and it directly benefits the Mono community, who previously had to expend effort duplicating functionality and are now able to re-use Microsoft's Open Source software as-is and contribute patches back to improve its support of Mono. F# is a great example of this: as it's completely open sourced it has surprisingly great support for Mono - in fact all my forays into F# were done on Mono/OSX with Sublime Text. One of Microsoft's Open Source releases has even created an active community in SignalR, which has shot to the top of GitHub's C#/.NET charts and whose SignalR-powered JabbR.net chat has become a popular hangout for .NET developers.
Areas that haven't been as fortunate were alternative Open Source frameworks that were fast to adopt new approaches popularized on other platforms, filling the void before Microsoft introduced a competing solution. E.g. ASP.NET MVC was created years after the MonoRail MVC framework and has since deflated that community. Their latest attempt at an ORM Data Access Layer, Entity Framework, has negatively impacted the once-thriving community of NHibernate (the earlier prominent Open Source ORM). Despite being several times slower than every other Open Source .NET ORM, EF has still succeeded in attracting more downloads than all other ORMs combined. Likewise Microsoft seem to be in a perpetual state of creating and releasing new service frameworks, many of which have come and gone, including .asmx, CSF, WCF, WCF/REST, WSE, WCF DataServices, WCF RIA Services and now Web API. During the years between releases, many alternative Open Source service frameworks have provided more than capable alternatives that would've been more suitable options for various use-cases, if it weren't for their existence remaining largely unknown to the wider .NET community. Even to this day most .NET developers still aren't aware that suitable Open Source alternatives exist, despite many of them having delivered solutions for more than 4 years.
The .NET platform is like no other: Microsoft has PR channels, Evangelists, MVP Award programs and control over VS.NET that command a strong influence over .NET mindshare, where they're seen by most .NET developers as the authoritative voice for the .NET ecosystem. Historically they've only used their influence to validate their own libraries and frameworks, which has contributed to many .NET companies being reluctant to deviate from Microsoft's prescribed technology stacks and explore alternative solutions. We've experienced this ourselves on a number of occasions where developers wanted to use ServiceStack but were unable to convince their company to adopt an alternative framework when a Microsoft solution exists, despite presenting favourable examples and benchmarks. I expect many other alternative Open Source libraries and frameworks have suffered similar fates.
This environment makes it hard for Open Source communities to thrive, and C#/.NET has seen its relative popularity decline, recently slipping off GitHub's top 10 languages list (the current home of Open Source), where there currently exist precious few Open Source .NET projects that have resisted this trend and been able to form communities around their independent technologies. The Mono project is by far the brightest spark, with the most healthy and talented developer community, providing a lot of value in enabling .NET applications to run on popular alternative platforms like iOS, Android, Linux and OSX. For many projects, supporting Mono provides the best opportunity to expand their reach, which has been one of our primary motivations for always ensuring first-class support of ServiceStack on Mono. Behind Mono, NancyFx and ServiceStack are doing their best to grow the Open Source .NET community, attracting more than 200 unique contributors between them - many of whom were first-time contributors to Open Source. MonoGame and RavenDB are two notable others also gaining popularity. We're definitely proud to be amongst the top GitHub projects encouraging Open Source in .NET, but we still have a desire to see more engaging .NET communities and more developers encouraged to pursue the Open Source development model, joining existing communities and forming new ones.
Raising awareness of alternative libraries and frameworks is where I believe some collaboration from Microsoft would provide the most benefits. This is a situation where trying to grow a bigger Open Source .NET community would provide more benefits than trying to take a bigger slice of it. Up to this point Scott Hanselman has probably single-handedly provided the most awareness by covering various Open Source libraries and frameworks on his popular personal blog. After Scott, Glenn Block has helped in promoting various frameworks from his very active @gblock twitter account. More formal efforts from Microsoft in raising awareness and validating the various projects would, amongst other things, encourage more .NET developers to get into Open Source and give existing developers enough momentum to continue to enhance their own projects - both are vital ingredients in sustaining a thriving Open Source community.
Being a profit-motivated company, Microsoft would need some financial incentive to promote alternative libraries and frameworks. I would hope that someone could build a business case around how a thriving Open Source .NET community would encourage more developers to choose .NET, which in turn creates a larger potential customer base for their server tools and Azure services. Even if they only promoted alternative .NET frameworks in the context of running on Azure, as they do with node.js, python and java, that would still yield an improvement. To maximize its potential growth, Open Source .NET would need buy-in from Microsoft where expanding Open Source .NET communities is seen as a desirable metric to strive for, at which point they would recognize Open Source contributions in their MVP award programs and promote them in their PR channels. Failing any future Microsoft involvement, I believe the Mono project holds the best hope for encouraging more .NET developers to join Open Source.
You made a comment recently on one of the forums - "I'm hoping next year to be even better, as I have a few things planned that should pull us away from the other frameworks" - would you elaborate what features you have in mind for the future?
Well, I was being intentionally vague in the forums as we want to save some features for surprise announcements, since we believe we have a unique opportunity to deliver some newsworthy products and features in the near future. But the general theme is to remain a value-focused services framework and implement the remaining useful bits of WCF, to provide a more appealing upgrade path to ServiceStack. Some publicly known features we've mentioned we have planned for next year include:
- Merging the Async branch and its async pipeline
- Creating new fast async TCP endpoints
- Enabling fast, native adapters for node.js and Dart server processes
- Enabling integration with more MQ endpoints (e.g. RabbitMQ and ZeroMQ)
- VS.NET integration and our improved solution for WCF's 'Add Service Reference'
- An integrated development workflow and story for Mono/Linux
- An automated deployment story for the Amazon EC2 and Google Compute Engine clouds
- Signed and long-term stable commercial packages
- Starter templates around popular Single Page App stacks: Backbone.js, AngularJS, Yeoman and Dart
- Starter templates for creating CRM and SharePoint-enabled support systems
- A website re-design and improved documentation