SQL Server 2005 SP1 won’t work with Vista October 31, 2006

Posted by Patricio in Software Engineering.
add a comment

It’s no secret that a number of applications, including several of Microsoft’s own, are not going to work properly with Windows Vista when the product ships.

A new one to add to the app-compat alert list: SQL Server 2005 Service Pack 1 (SP1), which Microsoft made available for download in April 2006.

(The older SQL Server 2000 and SQL Server 7.0 releases won’t be supported for Vista or Longhorn Server, either.)

Microsoft has begun advising customers that they will need SQL Server 2005 SP2 in order to maintain Vista and Longhorn Server compatibility. SQL Server 2005 SP2 is not yet out. In fact, it’s not even available in alpha or beta form yet. According to a blog post on October 30 from Microsoft consultant Benjamin Jones, the first Community Technology Preview (CTP) build of SP2 is due out sometime soon.

Earlier this summer, Microsoft officials warned customers and partners that MSDE – the free, lightweight version of Microsoft’s SQL Server engine – would not work on Vista.

Microsoft officials acknowledged last month that Visual Studio 2005 SP1 would not work with Vista from the get-go. Visual Studio 2005 SP1 is expected to ship in late 2006 or early 2007. The beta program for the service pack ended on October 30.

By the way, speaking of SQL Server, Microsoft has decided to once again rename the mobile version of its SQL Server database (which also runs on desktop systems). The product formerly known as SQL Server Everywhere is now known as SQL Server Compact Edition. If that sounds familiar, it should: Microsoft has decided to revert to the product’s original name.

Microsoft Pre-release Software Visual Studio Code Name “Orcas” – October Community Technology Preview (CTP) October 31, 2006

Posted by Patricio in Orcas, Software Engineering.
add a comment

Visual Studio Code Name “Orcas” delivers on Microsoft’s vision of smart client applications by enabling developers to rapidly create connected applications that deliver the highest quality rich user experiences. This new version enables any size organization to rapidly create more secure, manageable, and more reliable applications that take advantage of Windows Vista and the 2007 Office System. By building these new types of applications, organizations will find it easier than ever before to capture and analyze information so that they can make effective business decisions.

This download is the October 2006 Community Technology Preview of Microsoft Visual Studio Code-Named “Orcas”.

Note: This CTP is available only as a Virtual PC image. You will need Virtual PC or Virtual Server to run this image. Depending on your hardware, the download files may take 30 to 60 minutes to decompress.

This CTP targets early adopters of the Microsoft technology, platform, and tools offerings. It enables developers to experience the upcoming toolset and underlying platform improvements. We designed this release to enable developers to try out new technology and product changes, but not to build production systems. This limitation is fully covered in the EULA that accompanies this CTP.

The highlights of this CTP include:

  • ADO.NET 3.0 Advancements
    • Enhanced the existing .NET Data Provider to work with the new features in ADO.NET 3.0, such as LINQ and object services
    • Database and application object isolation assists in minimizing the impact of database schema changes in existing applications
    • Developers can create scripts as actual programs (instead of VBScript scripts) that are still completely self-contained in a single file and can be trivially modified, compiled, and executed in any environment that has .NET installed.
    • eSQL language support enables developers to build applications that provide users with an ad-hoc query capability.
  • LINQ over XML (XLinq)
    • Core functionality of the XLinq API, such as loading, modifying, and saving XML documents
    • Annotation support with a lightweight, typed, but general-purpose annotation mechanism that can be used to associate information such as line numbers, schema types, and application objects with specific nodes in an XLinq tree (a short sketch follows this list)
  • Multi-targeting
    • Support multitargeting within the IDE by enabling Visual Studio to leverage MSBuild using the tasks and targets that were shipped in Visual Studio 2005. Additionally, command line solutions will build using the toolset appropriate for the .NET Framework version that is being targeted.
  • Improved 64-bit application working set
    • On 64-bit systems, better code layout in system assemblies will result in an improved working set.
  • Lightweight reader/writer lock with deadlock-free upgrade support.
    • The new System.Threading.ReaderWriterLockSlim class supports basic read and write locks, allowing for better scalability in read-only concurrent worker scenarios. As its name implies, this lock performs anywhere from 2x to 5x better than the existing ReaderWriterLock class, and scales better on multi-processor and multi-core machines. The type also supports upgradeable reads: if code needs to inspect some state before deciding to acquire the write lock, an upgradeable read allows concurrency-safe reading with an optional deadlock-free upgrade to write (see the sketch after this list). Recursion is also disabled by default, helping developers write correct code, with an optional recursive mode that can be turned on at lock instantiation time.
  • A high-performance trace listener that logs XML to disk in the event schema.
    • The System.Diagnostics.EventSchemaTraceListener is the first listener in the namespace that is highly tuned for logging performance. Like the XmlWriterTraceListener, this trace listener logs XML to disk. In particular, it logs in the event schema, which is shared by some other new technologies. Its performance is drastically improved over previous logging trace listeners, especially on machines with multiple processors. It is also the first trace listener that offers many different disk-logging options, such as circular logging across multiple files (a sketch follows this list).
  • Getting VSTO and/or controls off machine policy/legacy policy migration
    • Developers of managed browser controls can now create manifests for their controls and Authenticode sign the manifests. An enterprise can then choose to trust the controls by manifest signature, rather than modifying CAS policy. This provides a bridge from the CAS policy trust model to the trusted publisher model in Orcas.
  • Security Platform Parity – Suite B support: AES
    • Cryptography developers can now use the FIPS-certified implementations of the advanced SHA hashing algorithms and the AES encryption algorithm in managed code. These classes follow the same familiar patterns as the existing cryptography algorithms, making it easy for developers to use the new classes right away (an example follows this list).
  • A new date/time data structure that can specify an exact point in time relative to the UTC time zone.
    • The current DateTime is insufficient for specifying an exact point in time. DateTimeOffset represents a date and time together with an offset. It is not meant to be a replacement for DateTime; it should be used in scenarios involving absolute points in time. DateTimeOffset includes most of the functionality of the current DateTime API and allows seamless conversion to DateTime as well (see the example following this list).
  • New IO types that expose almost all pipe functionality provided by Windows.
    • Pipes can be used for inter-process communication (IPC) between processes running on the same machine, or on other Windows machines within a network. We’ve added managed support for both anonymous pipes and named pipes. Anyone familiar with streams should be comfortable using these new APIs to achieve IPC (a sketch follows this list).
  • A new high performance set collection.
    • HashSet is a new generic collection that has been added to the System.Collections.Generic namespace. It is an unordered collection that contains unique elements. In addition to the standard collection operations, HashSet provides standard set operations such as union, intersection, and symmetric difference (an example follows this list).
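
To make a few of these highlights concrete, here are some quick C# sketches. First, the XLinq load/modify/save cycle plus an annotation. This is written against the System.Xml.Linq names as they eventually shipped; the CTP-era XLinq API has the same basic shape, but details may differ, and the file name is illustrative.

using System;
using System.Xml.Linq;

class XLinqDemo
{
    static void Main()
    {
        // Load, modify, save an XML document.
        XElement order = XElement.Load("order.xml");
        order.SetAttributeValue("processed", true);

        // Attach an arbitrary application object to a node, then read it back.
        order.AddAnnotation("seen by importer");
        Console.WriteLine(order.Annotation<string>());

        order.Save("order.xml");
    }
}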
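
Next, a minimal sketch of the ReaderWriterLockSlim upgradeable-read pattern. The cache class and its members are hypothetical, purely for illustration.

using System.Collections.Generic;
using System.Threading;

// Hypothetical cache illustrating ReaderWriterLockSlim's upgradeable reads.
class SettingsCache
{
    private readonly ReaderWriterLockSlim _lock = new ReaderWriterLockSlim();
    private readonly Dictionary<string, string> _data = new Dictionary<string, string>();

    public string Read(string key)
    {
        _lock.EnterReadLock();               // many readers may hold the lock concurrently
        try
        {
            string value;
            _data.TryGetValue(key, out value);
            return value;
        }
        finally { _lock.ExitReadLock(); }
    }

    public void AddIfMissing(string key, string value)
    {
        _lock.EnterUpgradeableReadLock();    // inspect state before deciding to write
        try
        {
            if (!_data.ContainsKey(key))
            {
                _lock.EnterWriteLock();      // deadlock-free upgrade to write
                try { _data[key] = value; }
                finally { _lock.ExitWriteLock(); }
            }
        }
        finally { _lock.ExitUpgradeableReadLock(); }
    }
}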
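
The event-schema trace listener can be wired up like any other trace listener. A minimal sketch, assuming the single-argument (file name) constructor; the file name itself is illustrative.

using System.Diagnostics;

class TraceDemo
{
    static void Main()
    {
        // Log XML in the event schema to disk; the listener is tuned for
        // high-throughput logging, especially on multi-processor machines.
        Trace.Listeners.Add(new EventSchemaTraceListener("app.trace.xml"));
        Trace.TraceInformation("application started");
        Trace.Flush();
    }
}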
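
For the Suite B support, here is a sketch of one-shot AES encryption, assuming the CAPI-backed class name as it later shipped (AesCryptoServiceProvider). The plaintext is illustrative; a real application would manage keys and IVs explicitly.

using System;
using System.Security.Cryptography;

class AesDemo
{
    static void Main()
    {
        byte[] plaintext = { 1, 2, 3, 4 };

        // FIPS-certified AES; follows the same SymmetricAlgorithm pattern
        // as the existing DES/TripleDES/RC2 classes.
        using (var aes = new AesCryptoServiceProvider())       // a random key and IV are generated
        using (ICryptoTransform encryptor = aes.CreateEncryptor())
        {
            byte[] ciphertext = encryptor.TransformFinalBlock(plaintext, 0, plaintext.Length);
            Console.WriteLine(Convert.ToBase64String(ciphertext));
        }
    }
}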
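
A quick illustration of DateTimeOffset; the date and offset are arbitrary.

using System;

class DateTimeOffsetDemo
{
    static void Main()
    {
        // An exact point in time: a wall-clock time plus its offset from UTC.
        var meeting = new DateTimeOffset(2006, 10, 31, 9, 0, 0, TimeSpan.FromHours(-8));

        // The same instant expressed in UTC.
        Console.WriteLine(meeting.UtcDateTime);   // 10/31/2006 5:00:00 PM

        // Seamless conversion back to a plain DateTime (the offset is dropped).
        DateTime local = meeting.DateTime;
        Console.WriteLine(local);                 // 10/31/2006 9:00:00 AM
    }
}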
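
A sketch of the managed named-pipe support, assuming the System.IO.Pipes types as they later shipped. The pipe name is arbitrary, and the client shown in the trailing comment would run in a second process.

using System.IO;
using System.IO.Pipes;

class PipeServer
{
    static void Main()
    {
        // Create a named pipe and block until one client connects.
        using (var server = new NamedPipeServerStream("demo_pipe"))
        {
            server.WaitForConnection();
            using (var writer = new StreamWriter(server))
                writer.WriteLine("hello from the server");
        }
    }
}

// A matching client, for a second process:
//
// using (var client = new NamedPipeClientStream(".", "demo_pipe"))
// {
//     client.Connect();
//     using (var reader = new StreamReader(client))
//         Console.WriteLine(reader.ReadLine());
// }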
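
Finally, the HashSet set operations. These methods mutate the set in place, which is why each operation below starts from a fresh copy.

using System;
using System.Collections.Generic;

class HashSetDemo
{
    static void Main()
    {
        var a = new HashSet<int> { 1, 2, 3 };
        var b = new HashSet<int> { 3, 4, 5 };

        var union = new HashSet<int>(a);
        union.UnionWith(b);                 // { 1, 2, 3, 4, 5 }

        var intersection = new HashSet<int>(a);
        intersection.IntersectWith(b);      // { 3 }

        var symmetric = new HashSet<int>(a);
        symmetric.SymmetricExceptWith(b);   // { 1, 2, 4, 5 }

        foreach (int i in symmetric)
            Console.WriteLine(i);           // unordered, so element order may vary
    }
}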

An Introduction to Team Foundation Server Version Control from a Visual SourceSafe User’s Perspective October 30, 2006

Posted by Patricio in Software Engineering, Visual Studio Team System.
1 comment so far

Steven St. Jean has just posted a great document that helps users make the move to Team Foundation Server’s version control. While his goal is to help VSS users, this document will also help users moving from many other source control systems, such as StarTeam. He includes lots of screen shots.

Microsoft Visual Studio 2005 IDE Enhancements October 30, 2006

Posted by Patricio in Software Engineering.
add a comment

Visual Studio 2005 IDE Enhancements are a set of Visual Studio extensions that are designed to make you more productive. These enhancements are directly integrated into the Visual Studio IDE. This set of enhancements includes Source Code Outliner, Visual C++ Code Snippets, Indexed Find, Super Diff and Event Toaster tools. All these tools except the IDE Event Toaster can be invoked from Visual Studio’s View.OtherWindows menu group. The Event Toaster tool can be configured from the Tools Options dialog under the PowerToys node. The Visual C++ Code Snippets can be invoked on any C++ source file. Previously, these enhancements were only available via the Visual Studio 2005 SDK. This installation does not require Visual Studio 2005 SDK.

Source Code Outliner: The Source Outliner tool is a Visual Studio extension that provides a tree view of your source code’s types and members and lets you quickly navigate to them inside the editor.

Visual C++ Code Snippets: The Visual C++ Code Snippets tool lets you insert snippets in your code by using a pop-up menu that contains programming keywords. The VB.NET and C# languages already have this functionality in Visual Studio 2005.

Indexed Find: The Indexed Find tool is a Visual Studio extension that uses the Microsoft Indexing Service to provide improved search capabilities in the integrated development environment (IDE). It sends the results of a search to the Output window.

Super Diff Utility: The Super Diff tool is a Visual Studio extension that compares text files. It uses color coding and graphics to show the differences between the files: deleted text (red), changed text (blue), and inserted text (green).

Event Toaster Utility: The Event Toaster tool is a Visual Studio extension that notifies users about specific events within the Visual Studio IDE.

Windows Vista Developer Story October 30, 2006

Posted by Patricio in Software Engineering.
add a comment

The Microsoft Windows Vista Developer Story includes content for developers and other technology experts and managers interested in an in-depth exploration of some of the new and extended features in Windows Vista.

The Windows Vista Developer Story is released to the Windows Vista Developer Center (http://msdn.microsoft.com/windowsvista/) site in the form of Articles, published at a rate of approximately one every two weeks. Those Articles are only a summary of the Windows Help file, which can be downloaded here.

The following list details the published (linked) and planned (coming soon) Articles:

Fundamentals

Presentation

  • Aero, Windows Presentation Foundation
  • ASP.NET, Windows Graphics Foundation

Communication

Data

Mobility

Interoperability & Migration

Media

The .NET Show: Windows Vista Readiness October 25, 2006

Posted by Patricio in Software Engineering.
add a comment

Manmeet Bawa and Doug Wood describe the issues that ISVs need to address when preparing their apps for Windows Vista.

Mark Taylor walks us through some best practices for developers to follow for their Windows Vista apps.

A download is available.

The Visual Studio Team System – Project Server Connector! October 25, 2006

Posted by Patricio in Software Engineering, Visual Studio Team System.
add a comment

The Connector can be downloaded here: http://www.avanadeadvisor.com/TFS-ProjectServerConnector.zip. The Connector is largely based on the sample Project Server 2003 – Visual Studio Team Foundation Server Beta 2 Connector available on GotDotNet. The Connector is also available as a part of the Avanade Software Lifecycle Platform™.

You can learn more about it here: http://msdn.microsoft.com/vstudio/why/avanade/default.aspx.

David Anderson: Thoughts on Visual Studio Team System and “Dark Matter” Iteration Forecasting October 23, 2006

Posted by Patricio in Software Engineering, Visual Studio Team System.
add a comment

During his time at Microsoft, David was an architect for the Microsoft Solutions Framework (MSF) in the patterns and practices group. David talks about using Visual Studio Team System to manage software projects and how to interpret the various reports which Team Foundation Server generates. David also discusses an iteration forecasting concept which he calls “dark matter.” Brian Keller conducts the interview.

From Channel 9.

Note: David’s slides which he referenced during this interview can be downloaded from his blog.

Software-Development Methodologies and Visual Studio Team System October 23, 2006

Posted by Patricio in Software Engineering, Visual Studio Team System.
add a comment

There is a diverse set of methodologies for different types of software-development life cycles. To implement these methodologies effectively and consistently, it is important to have life-cycle tools that automate the processes and artifacts of the methodologies. Microsoft Visual Studio Team System (VSTS) provides a compelling solution for methodology management and automation.

link

by Sanjay Narang

Visual Studio 2005 Virtual Labs October 23, 2006

Posted by Patricio in Software Engineering.
add a comment

Patterns & practices Guidance Explorer Beta 2! October 20, 2006

Posted by Patricio in Software Engineering, Software Factories.
add a comment

Guidance Explorer Beta 2 now connects to an online guidance store! Source code is also available. Guidance Explorer is a tool that enables discovery, composition, and consumption of high-quality development guidance.

Usage Scenarios

  • Find relevant patterns & practices guidance
  • Build customized checklists for your development scenarios
  • Build customized guidelines for your development scenarios
  • Build custom sets of guidance and share them with your team as recommended practices

For more information see the Overview PowerPoint Slides or the Slide Index.

CommSee Project – Commonwealth Bank of Australia October 20, 2006

Posted by Patricio in Agile, Software Engineering, WCF.
4 comments

CommSee is a simply amazing application written by the team at CBA. This application is visually stunning and architecturally interesting, and on top of that it has delivered solid business value to the bank’s stakeholders, users, and customers.

CommSee Architecture Overview (length: 26:06, size: 206 MB, format: wmv)

Overview of the CommSee project and architecture, featuring:

  • Stuart Johnson, General Manager, Integration and Service Oriented Architecture
  • Jon Waldron, Database Architect
  • Edward Gallimore, Architect
  • Dan Green, Architect

October 19, 2006

Posted by Patricio in Software Engineering.
1 comment so far


Visual Studio for Database Professionals and other Cool Data Management Tools for .NET October 19, 2006

Posted by Patricio in Software Engineering, Visual Studio Team System.
1 comment so far

VS for Database Professionals has been getting rave reviews, and includes support for database refactorings, schema and data comparisons, database unit testing, and automated data generation.  You can learn more about it on its MSDN dev-center and Community page.  You can also watch a nice Channel9 video with the team here.

MassDataHandler – A free CodePlex project that provides a utility library to help automate data generation for unit testing (it can be used within any unit test framework).

Data Dictionary Creator – A free tool that helps you document SQL Server databases, and helps you keep your documentation in sync with schema changes.

Exploring the new Domain-Specific Language (DSL) Tools with Stuart Kent October 19, 2006

Posted by Patricio in Software Engineering, Software Factories.
add a comment

Domain-Specific Language Tools allow Visual Studio 2005 developers to create their own graphical designers and code generation tools like the ones you find in Visual Studio today, such as the Class Designer. In this interview Brian Keller chats with Stuart Kent, a senior program manager on the Visual Studio Team System team, who gives us a tour of the DSL tools and creates an example DSL from scratch.

Visual Studio 2005 Team Edition for Database Professionals CTP6 October 18, 2006

Posted by Patricio in Software Engineering.
2 comments

Community Technology Preview (CTP) 6
Tools for building SQL databases in a managed project environment with support for versioning, deployment, unit testing, refactoring, and off-line SQL development.

See Cameron’s post (CTP6 is LIVE!) for the details.

SQL Server Database Publishing Wizard Community Technology Preview 1 October 16, 2006

Posted by Patricio in Software Engineering.
add a comment

The SQL Server Database Publishing Wizard provides a way to publish databases to T-SQL scripts for later use.

SQL Server Database Publishing Wizard enables the deployment of SQL Server 2005 databases into a hosted environment on either a SQL Server 2000 or 2005 server. It generates a single SQL script file which can be used to recreate a database (both schema and data) in a shared hosting environment where the only connectivity to a server is through a web-based control panel with a script execution window.

Free E-Learning: Developing Rich Experiences with Microsoft® .NET Framework 3.0 and Visual Studio® 2005 October 15, 2006

Posted by Patricio in Software Engineering, WCF, WF, WPF.
add a comment

This collection of three two-hour premium clinics covers the new capabilities provided by the .NET Framework 3.0. These clinics are for experienced developers and software architects who are looking to adopt Microsoft’s next-generation technology within their solutions.

Topics covered within the collection include:

  • Windows Presentation Foundation
  • Windows Workflow Foundation
  • Windows Communication Foundation

Adding a manifest to a Vista application October 13, 2006

Posted by Patricio in Software Engineering.
8 comments

Under Vista, an application can have a manifest that identifies the privilege level it needs to run. These manifests can serve other purposes, too: they’re also known as fusion manifests, and can be used to identify dependencies, among other things. Adding one to your application starts with adding a file to your project (right-click and choose Add, New Item; depending on the language you’re using, you may be able to choose XML file, or else Text file will do). Then you put appropriate XML in it, like this:

<?xml version="1.0" encoding="utf-8" ?>
<assembly xmlns="urn:schemas-microsoft-com:asm.v1" manifestVersion="1.0">
  <assemblyIdentity version="1.0.0.0"
                    processorArchitecture="X86"
                    name="Sample"
                    type="win32" />
  <description>Sample Manifest Test Application</description>
  <trustInfo xmlns="urn:schemas-microsoft-com:asm.v3">
    <security>
      <requestedPrivileges>
        <!-- <requestedExecutionLevel level="requireAdministrator" /> -->
        <requestedExecutionLevel level="asInvoker" />
        <!-- <requestedExecutionLevel level="highestAvailable" /> -->
      </requestedPrivileges>
    </security>
  </trustInfo>
</assembly>

Semantic coupling October 12, 2006

Posted by Patricio in Software Engineering, WCF.
add a comment

For better or worse, SOA (service-oriented architecture) continues to be the current industry fad. As SOA continues along the “hype curve” (a term I’m borrowing from Gartner), more and more people are starting to realize that SOA isn’t a silver bullet, and that it doesn’t actually replace n-tier client/server or object-orientation.

What will most likely happen over the next couple of years is that SOA will fall into the “pit of disillusionment” (part of the hype curve that I think of as the “pit of despair”), and many people will decide, as a result, that it is totally useless. This will happen in no small part because some organizations are investing far too much money in SOA now, while it is overly hyped – and they’ll feel betrayed when “reality” sets in.

After a period of disrepute, SOA may then rise to a “plateau of productivity”, where it will finally be used to solve the problems it is actually good at solving.

Some technologies don’t live through the “despair” part of the process. Sometimes the harsh light of reality is too bright, and the technology can’t hold up. Other times, a competing technology or concept hits the top of its hype curve, derailing a previous technology. Over the next very few years, we’ll see if SOA holds up to the despair or not.

This is a pattern Gartner has observed for virtually all technologies over many, many years. If you think about any technology introduced over the past 20 years or more, almost all of them have followed this pattern: over-hyped, then subject to an over-reaction when reality sets in, and finally used as a real solution.

My colleague and mentor, David Chappell, recently blogged about some of the realities people are discovering as they actually move beyond the hype and try to apply SOA. It turns out, not surprisingly, that achieving real benefits in terms of reuse is much harder than the SOA evangelists would have anyone believe.

I think this is because SOA focuses on only one part of the problem: syntactic coupling. SOA, or at least service-oriented design and programming, is very much centered around rules for addressing and binding to services, and around clear definition of syntactic contracts for the API and message data sent to and from services.

And that’s all good! Minimizing coupling at the syntactic level is absolutely critical, and SOA has moved us forward in this space, picking up where EAI (enterprise application integration) left off in the 90’s.

Unfortunately, syntactic coupling is the easy part. Semantic coupling is the harder part of the problem, and SOA does little or nothing to address this challenging issue.

Semantic coupling refers to the behavioral dependencies between components or services. There’s actual meaning to the interaction between a consumer and a service.

Every service implements some tangible behavior. A consumer calls the service, thus becoming coupled to that service, at both a syntactic and semantic level. At the syntactic level, the consumer must use the address, binding and contract defined by the service – all of which are forms of coupling. But the consumer also expects some specific behavior from the service – which is a form of semantic coupling.

And this is where things get very complex. The broader the expected behavior, the tighter the coupling.

As an example, a service that does something trivial, like adding two numbers, is relatively easy to replace with an equivalent. Such a service can even be enhanced to support other numeric data types with virtually no chance of breaking existing consumers. So the semantic coupling between a consumer and such a service is relatively light.
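
To make the contrast concrete, here is a hypothetical C# sketch; the contract names are mine, not from any real system. The narrow contract is easy to replace or extend without breaking callers; the broad one implies a whole cluster of behaviors, and every implied effect is semantic coupling for the caller.

// Narrow behavior: trivial to replace or reimplement, light semantic coupling.
public interface ICalculatorService
{
    int Add(int x, int y);
}

// Broad behavior: what does "ship" imply? Invoicing? Customer notification?
// Pick lists? Sales history? Callers inevitably depend on the answers.
public interface IOrderService
{
    void ShipOrder(int orderId);
}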

Another example is credit card verification. Obviously the internal implementation of this behavior is much more complex, but the external expectations of behavior remain very limited. Like adding two numbers, verifying a credit card is a behavior that accepts very little data, and returns a very simple result (yes/no).

Contrast this with many other possible business services, such as shipping an order, or generating manufacturing documentation. In these (quite common) scenarios, the service performs, or is expected to perform, a relatively broad set of behaviors. The result is a whole group of effects and side-effects – all of which should be considered as black-box effects by any caller. But the more a service does, the less “black-box” it can be to its callers, and the tighter the coupling.

And this leaves us in a serious quandary. There’s a high cost to calling a service. There’s a lot of overhead to creating a message, serializing it into text (XML), routing it through some communications stack onto the wire, getting the electrons across the wire through some protocol (probably TCP) and all the attendant hardware involved, picking it up off the wire on the server, routing it through another communications stack, deserializing the text (XML) back into a meaningful message and finally interpreting the message. Only then can the service actually act on the message to do real work.

Worse, that’s only half the story, because most people are creating synchronous request/response services, and so that whole overhead cost must be paid again to get the result back to the caller!

Before going further, let me expand on this “overhead cost” concept to be more precise.

I worked for many years in manufacturing. In that industry there’s the concept of cost accounting – people make their living at tracking costs. They divide costs into overhead, setup and run (there are other models, but this one’s pretty standard).

To make this somewhat more clear, I’ll use the metaphor of baking cookies.

Overhead costs are all the salaried people, the buildings, equipment and so forth – costs that are paid whether widgets are manufactured or not. When baking cookies, this is the cost of having a kitchen, a stove, electricity, natural gas, and of course the person doing the baking. In most homes these costs exist regardless of whether cookies are baked or not.

Setup costs are applied overhead. They are costs that are required to build a set of widgets, but they are only incurred when widgets are being manufactured. These costs include setting up machines, programming devices, getting organized, printing documents, etc. When baking cookies, this is the cost (in terms of time) of getting out the various ingredients, bowls, spoons and other implements. It is also the cost of cleaning up after the baking is done – all the washing, drying and putting-away-of-implements that follows. These costs are directly applied to the process, but are pretty much the same whether you bake one dozen or ten dozen cookies.

Run costs are those costs that are incurred on a per-widget basis to make a widget. This includes the hourly rate of the workers manning the assembly line, the materials that go into the widget and so forth. When baking cookies, this is the time spent by the baker, the cost of the flour, eggs and other ingredients consumed in the process. Ideally it would include the amount of electricity or natural gas used to run the stove as well. Obviously detailed run costs can be hard to determine in some cases!

When calculating the cost of your cookies, each of these three costs is added together. The run rate is easy, as it is per-cookie by definition. The setup rate is variable – the more cookies you make in a batch the lower the relative setup cost, and the fewer cookies the higher the relative setup cost. Overhead is typically aggregated – the annual overhead cost is known, and is divided by the number of cookies (and other things) made over a year’s time. Obviously there’s lots of wiggle room in this last number.

For my purposes, in discussing services, the overhead rate isn’t all that meaningful. In our industry this is the cost of the IT staff, the servers, the server room, electricity and cooling and so forth.

But the setup rate and run rate become very meaningful when talking about services.

Calling a service, as I noted earlier, incurs a lot of overhead. This overhead is relatively constant: you pay about the same whether you send 1 byte or 1024 bytes to or from the service.

The run rate is the actual work done by the service. Once the message is parsed and available to the service, then the service does real, valuable work. This is the run rate for the service.

In manufacturing it is always important to manage the overhead and setup costs – they are a “pure cost”. The run rate cost must also be managed, but it is directly applicable to a product, and so that cost can be factored into the price you charge. Perhaps more importantly, your competitors typically have a comparable run rate (materials and labor cost about the same), but the overhead can vary radically.

To switch industries just a bit, this is why Walmart does so well (and is so feared). They have managed their overhead and setup costs to such a degree that they actually do focus on reducing their run rate (in their case, the per-unit acquisition cost of items).

Coming back to services, we face the same issue. Typically we deal with this using intuition rather than thinking it through, but the core problem is very tangible.

Would you call a service to add two numbers? Of course not! The setup/overhead cost would outweigh the run cost to such a degree that this makes no sense at all.

Would you call a service to ship an order, with all the surrounding activities that implies? This makes much more sense. The setup/overhead cost becomes trivial when compared to the run cost for such a service.

And yet coupling has the opposite effect. Which of those services can be more loosely coupled? The addition service of course, because it performs a very narrow, discrete, composable behavior.

Do you even know what the ship-an-order service might do? Of course not, it is too big and vague. Will it trigger invoicing? Will it contact the customer? Will it print pick lists for inventory? Will it update the customer’s sales history?

I would hope it does all these things, but very few of us would be willing to blindly assume it does them. And so we are forced to treat ship-an-order as something other than a black box. At best it is gray, but probably downright clear. We’ll require that the service’s actual behaviors be documented. And then we’ll fill in the gaps for what it does not provide, or doesn’t provide in a way we like.

(Or, failing to get adequate documentation, we’ll experiment with the service, probing to find its effects and side-effects and limitations. And then we’ll fill in the gaps for the bits we don’t like. Sadly, this is the more common scenario…)

At this point we (the caller of the service) have become so coupled to the service, that any change to the service will almost certainly require a change to our code. And at this point we’ve lost the primary goal/benefit of SOA.

Why? How can this be, when we’re using all the blessed standards for SOA communication? Maybe we’re even using an Enterprise Service Bus, or BizTalk Server, or whatever the latest cool technology might be. And yet this coupling occurs!

This is because I am describing semantic coupling. Yes, all the cool, whiz-bang SOA technologies help solve the syntactic coupling issues. But without a solution to the semantic, or behavioral, coupling it really doesn’t get us very far…

What’s even scarier, is that the vision of the future portrayed by the SOA evangelists is one where we build services (systems) that aggregate other services together to provide higher-level functionality. Like assembling simple blocks into more complex creations, that in turn can be assembled into more complex creations or used as-is.

Except that each level of aggregation creates a service that provides broader behaviors – and by extension tighter coupling to any callers (though the setup vs run costs become more and more favorable at the top level).

To bring this (rather long) post to a close, I want to return to the beginning. SOA is heading down the steep slope into the pit of disillusionment. You can head this off for yourself and your organization by realizing ahead of time, right now, that SOA only addresses syntactic issues. You must address the much harder semantic issues yourself.

And the tools exist. They have for a long time. Good procedural design, use of flow charts, data flow diagrams, control diagrams, state diagrams: these are all very valid tools that can help you manage the semantic coupling. Unfortunately the majority of people with expertise in these tools are nearing retirement (or have retired) – but the tools and techniques are there if you can find some old, dusty books on procedural design. Just remember to include the setup/overhead cost vs run cost in your decisions on whether to make each procedure into a “service”.

SOA solves some serious and important issues, but it is overhyped. Fortunately the hype is fading, and so we can look forward (perhaps 18 or 36 months) to a time when we can, with any luck, start focusing on the “next big thing”. Maybe, just maybe, that “big thing” will be some new and interesting way of addressing semantic coupling.

by Rockford Lhotka