
Loosely Coupled weblog


Wednesday, March 31, 2004

Widget syndication

One or two examples are emerging of web services being used to call up bits of functionality on demand. There's a review this week in Network Computing of DreamFactory Software's rich client plug-in for XML web services: DreamFactory 6.0 Ends Web App Development Nightmares. An application developed with DreamFactory runs in a browser, using plug-in code that's automatically downloaded directly from the vendor's website, along with application-specific code that's stored on the application owner's server (or, for individual use, on their own personal machine). Application building uses DreamFactory's hosted development environment, and is designed to be easy enough for content owners to handle rather than having to rely on highly skilled programmers.

For a simple demonstration of DreamFactory in action, take a look at John McDowell's blog about Embedding Visualization into your own page, and preceding entries. John is CTO of Grand Central, which uses DreamFactory to offer application-building capabilities to its customers. In his blog, he offers a linkmap visualization tool that he's built using DreamFactory, and he shows how to embed it in any web page using just a few lines of HTML. "One of the interesting attributes of creating well defined services is how easy it is to distribute the display code and have it completely separate from the business logic," he comments. "Anyone can add this code to their blog and provide a new metaphor for navigating their neighbourhood. As I add more features they will be delivered on demand i.e. no need to update software everything is delivered out of the network."

So here we have the environment provided as a service by DreamFactory; the specific functionality provided as a service by the developer (in this case John); customization of the functionality performed by whoever embeds the code in their web page; and the raw data provided from the target URL. This of course is a very simple demonstration in terms of the input data; DreamFactory is really targeted at working with XML and SOAP web services, so for example it could be used to aggregate multiple web services or even, with additional coding, to join them into a composite process.

The DreamFactory FAQ page has some succinct statements that bring out the simple elegance and flexibility of this very loosely coupled approach to adding flexible functionality to web services resources: "Simply install the product and start using publicly available web services or XML documents ... There is no source code. There is no deployment. Just go to a URL, build the application, and save your changes for worldwide distribution."

DreamFactory is going to be increasingly useful as more and more services emerge that users will want to couple together. It's no wonder that Grand Central as well as salesforce.com have become early partners of DreamFactory, since they're already in the business of hosted application services. But they're not isolated examples, and one of the most interesting trends going on at the moment is the emergence of APIs and XML web services as a kind of mass market phenomenon in the world of weblogs. We're already seeing growing mainstream adoption of RSS feeds for content syndication (CMP became the latest tech news publisher to bow to the inevitable and introduce RSS feeds, including one specific to the topic of web services, which we've added to the Loosely Coupled news headlines page).

Less noticed, but just as significant, is the emergence of widget syndication: the unbundling of functional services to make them available using web services APIs. An interesting example to watch will be leading weblog software vendor Six Apart's planned TypeKey service, as reported by internetnews.com in Six Apart Trains Guns on 'Comment Spam'. This is a much-needed comments capability for weblogs, which "would also be available for competing blog vendors" as well as authors of other third-party commercial applications.

I find it very encouraging to see more and more of these shareable services emerging at the same time as tools like DreamFactory are coming on the scene to make it easier to assemble them together. With supply turned on and enablement in place, it just needs users to start generating demand and the market will be ready to take off.

posted by Phil Wainewright 2:59 PM (GMT) | comments | link

Friday, March 26, 2004

Small-minded orchestration

BEA and IBM this week published an ill-conceived mongrel specification called BPELJ. Edwin Khodabakchian has done a great job of listing the technical shortcomings in a posting to his blog. I'm simply appalled at the defective thinking that is being used to promote this mongrel spec.

Its authors attempt to justify their poisonous cocktail of BPEL and Java by introducing their whitepaper with some sanctimonious twaddle about BPEL being geared towards "programming in the large," while the role of languages like Java is to perform "programming in the small."

What this fine-sounding rhetoric attempts to obscure is that BPELJ is a fix for Java programmers who can't get to grips with developing proper, platform-neutral BPEL code, and instead allows them to build as much as possible of their orchestration logic using Java and J2EE.

I don't have a problem with people writing mongrel code if that's what they want (or are obliged by circumstances) to do. But if it's a dog, let's call it a dog, and let's put up some warning signs so people understand what they're taking on here.

What the authors really mean by the phrase "programming in the small" is software that's tightly coupled to a specific environment. This requires an approach that we can usefully describe as "small-minded programming," since it combines advanced skills inside its narrow focus with a complete lack of concern about what happens when attempting to link with other systems. It's a totally appropriate programming style to use when developing a high-performance system that's dedicated to carrying out a specific function — in other words, the sort of system that very often sits behind an individual service interface within a loosely coupled orchestration.

It's no surprise that IBM and BEA would want to encourage and preserve this kind of approach among Java programmers, since they both market very popular Java development and deployment platforms, and it is clearly in their interests to encourage developers to increase their commitment to those platforms. But it strikes me as frankly irresponsible to dress it up as something that is respectable and acceptable to mix with the loosely coupled "programming in the large" of BPEL service orchestration, which is designed to co-ordinate processes irrespective of the underlying environments.

Effective use of BPEL requires an approach that is best described as "globally-minded programming." It means writing code that recognises the likelihood of encountering resources that have been created using unfamiliar notions, in unknown environments. One of the fundamental tenets of writing this kind of code is that you include external functions by calling services. Thus if, as the authors of BPELJ quite rightly state, there are computational functions that native BPEL doesn't do, the correct way to make those available to the orchestration is to provide them as services. I know this will seem an unnecessarily verbose and convoluted way of doing things to a small-minded programmer, but a globally-minded programmer understands why things have to be done that way when building an effective, adaptable orchestration.
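Purely for illustration, the contrast can be sketched in a few lines of Python (BPELJ itself embeds Java snippets inside BPEL XML; the discount logic and function names here are hypothetical):

```python
# "Small-minded" style: the computation is inlined in the orchestration
# itself, tying the process definition to one runtime and one codebase.
def orchestrate_inline(order_total):
    discount = order_total * 0.05           # business logic baked in
    return order_total - discount

# "Globally-minded" style: the same computation sits behind a service
# interface; the orchestration knows only the contract, not the code,
# so the implementation can live in any environment.
def discount_service(order_total):
    """Stand-in for a call to a remote discount service."""
    return order_total * 0.05

def orchestrate_via_service(order_total, service=discount_service):
    return order_total - service(order_total)

print(orchestrate_inline(100.0))        # 95.0
print(orchestrate_via_service(100.0))   # 95.0
```

The point of the second style is that the discount calculation could be swapped for a genuine network call without touching the orchestration; the first style forces a redeploy of the process itself whenever the logic changes.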

BPELJ is a sleazy compromise that some projects will be tempted to adopt as a shortcut to reaching a working solution. That's fine, so long as those involved remember that at some point in the near future they'll have to go back and refactor their BPEL code to replace all the Java snippets with proper service calls. It ranks on a par with creating an enterprise application on the assumption that you can add internationalization later on. This specification should come with a health warning slapped on the side of it.

posted by Phil Wainewright 10:58 AM (GMT) | comments | link

Thursday, March 25, 2004

Brownfield development

Integration is never a greenfield proposition. Since the main attraction of web services is the ability to link to other systems, it therefore follows that the vast majority of web services projects are going to be what town planners call brownfield developments: new structures that have to coexist with the existing cityscape. Often, there isn't even the luxury of building anew from the ground up. Developers find they have to build new facilities within the shell of an older structure, respecting existing services and the underlying infrastructure.

Everyone loves starting with a clean sheet, but there comes a point where it's just too expensive to wipe the slate and start again. Today's enterprises have reached that point with their information technology infrastructure, and so the adoption of web services is following a much more tortuous path than earlier waves of IT. It has to be undertaken carefully and systematically, adding new layers of capabilities without disrupting or undermining what's already in place.

Enterprises are feeling their way with little to guide them, since this is new territory for everyone. It was a striking (and unplanned) coincidence that the two most recent articles on Loosely Coupled have both been about partial adoption of service oriented architectures. This week we published an article about webMethods, which has aggressively service-enabled its integration product suite, under the headline Hedging your bets on SOA. Although webMethods has made this leap, there's no certainty that its customers are yet ready to follow its lead, hence the reference to hedging in the title.

Last month, we wrote about a customer of another integration vendor, SeeBeyond Technology. We called the article Going halfway to SOA, because this customer has explicitly held back from fully embracing SOA. I think this is probably a fairly typical case. Moving to SOA is something that has to be seen in terms of a strategic plan, spread over as much as a ten- to fifteen-year time span in total. Within that overall gameplan, enterprises will step through a series of pragmatic, tactical objectives that gradually advance their wider strategy. Hedging their bets and planning to get at least halfway there are both part and parcel of that approach.

All of this leaves vendors in an uncomfortable position. They need to be offering the right technologies and tools when their customers need them — and customers will be advancing at a variety of paces, some very fast, others much more measured — but if they jump the gun and get ahead of their customers, that will be just as damaging as falling behind.

Integration specialists like webMethods and SeeBeyond have to maintain this balancing act while competing not only with each other, but also with new rivals who want to encroach on their space. It's interesting to put the webMethods article side-by-side with another article we published a few months ago about BEA Systems, which also used a betting analogy for its headline: BEA stakes future on SOA adoption. Both vendors are equally keen to encourage their customers to embark on an SOA path. Others have similar strategies. Yet I can't help being suspicious of how much exactly these vendors are embracing the standards-based interoperability that SOA implies. Are they really going all the way, or will customers who follow their roadmap end up embracing nothing but a pig smeared with services lipstick?

I'm looking forward to publishing more answers to that question here on Loosely Coupled in the coming months. Regular readers may have noticed our output dropping to something of a trickle in the past few weeks. Rest assured it will recover. For the moment, our efforts are being diverted into preparing the launch of a new, paid publication that will be available from the site round about April 6th. It's a monthly subscription newsletter, to be issued in PDF format, called the Loosely Coupled digest. That link just has a pre-launch holding page for the moment; next week, we'll publish some detail about what's in the first issue.

posted by Phil Wainewright 11:14 AM (GMT) | comments | link

Wednesday, March 17, 2004

Disruptive insight

Ideas that were once fresh and innovative eventually become stale and hackneyed. A simple insight, convincingly expressed, can catapult a young academic into the limelight, launching a stellar career of glowing citations, lucrative conference appearances and best-selling business books. But as time progresses, the need to sustain that career leads to increasingly complex embellishments and refinements to the original idea. The established guru is faced with a simple choice: should he make incremental improvements to a very lucrative concept that audiences love, or should he invest a lot of effort in researching and developing fresh insights that nobody has ever expressed any interest in hearing about?

That was the thought that struck me as I read Phil Windley's account of OSBC2004: Clayton Christensen on Disruptive Technologies and Open Source, a presentation by the author of The Innovator's Dilemma and, more recently, The Innovator's Solution. Not that I believe Clayton Christensen has run out of steam; far from it: I'm a great fan of his work. But I am starting to fear that he may overcomplicate his message with unnecessary add-ons that risk obscuring the core concept for some readers.

I do feel though that the original insight is as fresh and as relevant as it was the day he first formulated it, and that the concept of disruptive innovation can even be applied to the field of academic thinking — which, after all, is just another market. A truly disruptive insight is one that starts out languishing in the wilderness. No one gets it, and it runs so counter to the collective wisdom of the academic establishment that they either ridicule it or, worse, they simply ignore it. But if its time has come, it gradually gains ground until eventually it takes its place in the mainstream canon. At that point, a truly insightful thinker faces an uncomfortable dilemma: does he continue to pursue disruptive ideas, and end up reviled and rejected like Socrates? Or does he accept the applause of the crowd and forswear any further disruptive thinking?

One additional item, then, that startups might add to Christensen's list of questions to ask themselves: "Which management gurus are offering fresh insights that will give us an edge in today's business climate?" Classics like Christensen should certainly be on everyone's reading list, but gaining an edge means seeking out the next generation's winning ideas, too.

posted by Phil Wainewright 10:30 AM (GMT) | comments | link

Friday, March 12, 2004

Up the stack

Grand Central's deal with AT&T is a terrific endorsement, but will it bring in any new business? It was cunning of the telco to launch WebService Connect with a live customer already in place, but of course Thomson Financial was already a longstanding customer of Grand Central's. So it's rather disingenuous, to say the least, for AT&T to describe Thomson as one of its first customers, when Thomson came already bundled with the product, as it were.

One of the things that I've found very interesting over the past year has been the way that telecoms carriers have been climbing onboard the web services bandwagon. The first was BT, which launched a hosted web services offering, initially with Flamenco Networks, and then later with AmberPoint. Actional, whose website (at the time of writing) also claims AT&T as one of its customers, has MCI on its books.

Carriers have a dual interest in web services technology. On the one hand, telecoms as an industry has always been an early adopter of information technology, so they're all testing it out for use in their in-house operations — in fact, the first case study we published on Loosely Coupled described how BT was using web services to integrate customer service applications (see EAI? Keep it simple, says BT). But much more important is their expectation of new revenue streams from offering hosted web services infrastructure to their customers.

The carriers' traditional revenue streams are still strong, but their ability to harvest enormous profits from those revenues is fading fast. Behind the headlines of the dot-com bust, a much bigger collapse was taking place as the next-generation telecoms industry imploded. There's enormous spare capacity out there today, and companies like MCI (the former WorldCom), XO and others have had much of the cost of building it wiped clean by their passage through Chapter 11 bankruptcy. Further price competition is on the way from structural changes like the emergence of low-cost Voice-over-IP (VoIP), which challenges the profitability of long-distance telephony, and cellular WiFi, which allows subscribers to connect to the network without having to pass through the local phone company's cables.

Faced with these challenges, carriers look to web services as their opportunity to move up the stack, away from unprofitable physical networking, into much more lucrative value-added software networks. They reason that they have a proven, trusted track record of providing reliable voice and data network services to their customers. Therefore, they believe, it's only natural that their customers will turn to them for the next generation of software networks based on service oriented architectures.

I'm interested to see this happening because I witnessed the carriers getting equally excited just a few years ago about the emergence of application service providers. They saw themselves as the natural providers of hosted applications to their customers, and they invested millions of dollars in positioning themselves for a lucrative market ... that never emerged.

Several years later, hosted applications are finally emerging, but from a completely different quarter than the carriers had predicted. And I think the same thing will happen with the carriers' efforts to position themselves to win the web services market. The fundamental flaw with their thinking in both cases is the concept that they're simply offering a different kind of network infrastructure proposition. Customers are not investing in web services because they love infrastructure. They're investing because they want specific business results. If they're interested enough in the infrastructure per se, they'll want to build it themselves. If they're not, they'll turn to providers who can combine infrastructure with finished services to run over it. That's why AmberPoint will probably make a lot more money from having value-added B2B network operator GXS as a customer than it will from BT. And why Systinet will do well out of providing the infrastructure software for Amazon's supplier network.

As for Grand Central Communications, the deal with AT&T puts it firmly on the map. But if it's hoping to get somewhere, it mustn't rest on its laurels and expect AT&T to gain any traction. I shan't be at all surprised if most of WebService Connect's early customers end up being introduced in the same way as Thomson Financial. Grand Central understands what it's offering. AT&T may think it does, but I doubt it really has a clue.

PS [added March 15]: More evidence that infrastructure providers don't understand application provision came in last week's news that hosting provider Interland has called time on development of Trellix, the hosted website creation business that it bought a year-and-a-half ago. Trellix (which once played a part in keeping Blogger going during a tough patch) was the brainchild of VisiCalc co-creator Dan Bricklin, who announced the shutdown in his blog last week: End of an era as Trellix office closes. If a website hosting company like Interland can't get to grips with a hosted website builder like Trellix, what chance does a telecoms company like AT&T have with an application infrastructure proposition like WebService Connect? Believe me, the skills just aren't transferable from one layer of the stack to another. They're absolutely different worlds.

posted by Phil Wainewright 10:22 AM (GMT) | comments | link

Friday, March 5, 2004

Amazon commercializes RSS

RSS broke out of its news-and-weblog-tracking ghetto this week when Amazon.com expanded its range of syndicated content feeds. The online retailer published a new list of about 200 ready-made RSS feeds, each showing the top-selling items for a particular subcategory or search term.

The functionality is not new; the RSS feeds have been available to anyone who could work out how to create them since last summer. Paul Bausch explained how it works at the time, but you had to be comfortable working with the technical details to take advantage of it. Doing it the hard way does give you more choices: here for example is an RSS feed of the bestselling books on web services that I just created (you may need to 'view source' to see the feed in your browser). If you don't have an RSS reader to hand, here is the feed converted to HTML. If I wanted to be really clever, I could change the "f=http://xml.amazon.com/xsl/xml-rss091.xsl" parameter at the end of the Amazon URL, and substitute my own XSL stylesheet to convert the information into other formats (which would be a neat demonstration of the virtues of transformation).
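For readers without an RSS tool to hand, here's a minimal sketch of what consuming such a feed involves, using Python's standard library (the feed fragment and item data below are hypothetical, but follow the RSS 0.91 shape that Amazon's feeds use):

```python
import xml.etree.ElementTree as ET

# A minimal RSS 0.91 fragment of the kind Amazon's feeds return
# (hypothetical item data, purely for illustration).
RSS_SAMPLE = """<?xml version="1.0"?>
<rss version="0.91">
  <channel>
    <title>Amazon.com: Best-selling books on web services</title>
    <item>
      <title>Example Web Services Book</title>
      <link>http://www.amazon.com/exec/obidos/ASIN/0000000000</link>
      <description>Usually ships in 24 hours -- Price: $29.99</description>
    </item>
  </channel>
</rss>"""

def parse_items(rss_text):
    """Return (title, link, description) tuples from an RSS 0.91 feed."""
    root = ET.fromstring(rss_text)
    return [(item.findtext("title"),
             item.findtext("link"),
             item.findtext("description"))
            for item in root.findall("./channel/item")]

for title, link, desc in parse_items(RSS_SAMPLE):
    print(title, "->", link)
```

An RSS reader does essentially this for every subscribed feed, which is why the same XML can just as easily be turned into HTML, or anything else, by an XSL stylesheet.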

But most people can't be bothered with all this messing about with URIs and XSL (Eric Raymond rightly calls this sort of thing An Open-Source Horror Story). They just want a link they can click on, and that's why Amazon has gone to the trouble of publishing the 200-or-so most popular RSS selections. By doing so, the company has stepped beyond the realms of what's technically feasible and ventured into the much more populist arena of giving people a finished product. So even though the more technically astute may have wondered what all the fuss was about, internetnews.com was right to highlight the move in its news story, Amazon.com Joins RSS Bandwagon.

The other thing that's interesting is that this is the first mainstream example of a retailer using RSS to disseminate its product catalog. Every item in those feeds carries a price, with a direct link to a page offering the product for sale. That's qualitatively different from the mainstream uses of RSS that have been seen in the past, almost all of them devoted to disseminating information and signally lacking a revenue model. Amazon's embracing of this new medium — reaching out to deliver filtered excerpts from its catalog to a fast-growing marketplace of RSS readers — is characteristically original in its thinking. Where Amazon leads, other online retailers would be well advised to follow.

UPDATE [added March 7th]: David Galbraith points out that Amazon's RSS feeds show up the format's current weaknesses by lumping price along with other bits of metadata into the RSS <description> field. His posting has attracted a flurry of comments from industry luminaries including Dave Winer, Phillip Hallam-Baker and Amazon's own web services evangelist, Jeff Barr. It's one of those threads in blogland where you can actually see the collective wisdom advancing as the interaction continues (a good example of how Blogs Can Be Infectious?). Jeff rightly points out, as I noted above, that anyone is free to transform the data into a more meaningful feed — in response to which Les Orchard has already set up an alternative XSL stylesheet to create a demo feed. As I was saying before, Only transform.
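A short Python sketch makes Galbraith's point concrete. The first item lumps the price into free-text description, as Amazon's feeds do today; the second uses a hypothetical dedicated price element of the kind the commenters are arguing for:

```python
import re
import xml.etree.ElementTree as ET

# Two ways of carrying a price in an RSS item (both fragments are
# hypothetical, for illustration only).
LUMPED = ("<item><title>Example Book</title>"
          "<description>List Price: $29.99 -- Usually ships in 24 hours"
          "</description></item>")
STRUCTURED = ("<item><title>Example Book</title>"
              "<price currency='USD'>29.99</price></item>")

def price_from_description(item_xml):
    """Scrape a price out of free text -- fragile: breaks if the wording changes."""
    desc = ET.fromstring(item_xml).findtext("description") or ""
    m = re.search(r"\$(\d+(?:\.\d{2})?)", desc)
    return float(m.group(1)) if m else None

def price_from_element(item_xml):
    """Read a price from a dedicated element -- robust to copy changes."""
    text = ET.fromstring(item_xml).findtext("price")
    return float(text) if text else None

print(price_from_description(LUMPED))
print(price_from_element(STRUCTURED))
```

The fragility sits in the regular expression: reword the description and the scraper silently stops finding prices, whereas a dedicated element survives any amount of copy-editing.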

posted by Phil Wainewright 10:28 PM (GMT) | comments | link

Assembling on-demand services to automate business, commerce, and the sharing of knowledge

Copyright © 2002-2004, Procullux Media Ltd. All Rights Reserved.