Showing posts with label Standards.

Saturday, 23 September 2023

U.S. Strategy for Standards for Critical and Emerging Tech

In May of 2023, the White House published a document titled, “United States Government National Standards Strategy for Critical and Emerging Technologies.”  The Executive Summary states:

Strength in standards development has been instrumental to the United States’ global technological leadership. Standards development underpins economic prosperity across the country and fortifies U.S. leadership in the industries of the future at the same time. Bolstering U.S. engagement in standards for critical and emerging technology (CET) spaces will strengthen U.S. economic and national security. The U.S. Government has long engaged in these standards development processes through an approach built on transparency, private sector and public sector leadership, and stakeholder engagement—a process that reflects the United States’ commitment to free and fair market competition in which the best technologies come to market. Government support for scientific research and development (R&D), an open investment climate, and the rule of law have also been critical for U.S. standards leadership. America’s workers, economy, and society have benefited significantly as a result, as have those of like-minded nations alongside which the United States has collaborated to forge technological progress.

Today, however, the United States faces challenges to its longstanding standards leadership, and to the core principles of international standard-setting that, together with like-minded partners, we have upheld for decades. Strategic competitors are actively seeking to influence international standards development, particularly for CET, to advance their military-industrial policies and autocratic objectives, including blocking the free flow of information and slowing innovation in other countries, by tilting what should be a neutral playing field to their own advantage.

The United States must renew our commitment to the rules-based and private sector-led approach to standards development, and complement the innovative power of the private sector with strategic government and economic policies, public engagements, and investments in CET. By supporting our unrivaled innovation ecosystem and related international standards development as part of a modern industrial strategy, we can ensure that CET are developed and deployed in ways that benefit not only the United States but all who seek to promote and advance technological progress. Strengthening the U.S. approach to standards development will lead to standards that are technologically sound, earn people’s trust, reflect our values, and help U.S. industry compete on a level playing field.

This strategy outlines how the U.S. Government will strengthen U.S. leadership and competitiveness in international standards development, and ensure that the “rules of the road” for CET standards embrace transparency, openness, impartiality and consensus, effectiveness and relevance, coherence, and broad participation.

Wednesday, 14 January 2015

Toyota: peace, not war

Here's a guest post from Nigel Swycher (founder and CEO of AISTEMOS), a keen observer of the patent scene.  Writes Nigel:
Toyota: peace, not war 
Is it just me, or are patents now a mainstream business issue?
Last week’s announcement by Toyota is a perfect illustration of the trend.  They have proudly announced that more than 5,600 fuel cell and related patents are available for royalty-free use.  There is no absence of detail: 1,970 patents relate to fuel cell stacks, 290 to high-pressure hydrogen tanks, 3,350 to fuel cell software and 70 to production and supply.
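Those category figures do tally with the headline number, as a quick sum (using only the counts quoted above) confirms:

```python
# Tally of the patent counts quoted in Toyota's announcement.
fuel_cell_stacks = 1970
high_pressure_tanks = 290
fuel_cell_software = 3350
production_and_supply = 70

total = fuel_cell_stacks + high_pressure_tanks + fuel_cell_software + production_and_supply
print(total)  # 5680 -- consistent with "more than 5,600" patents
```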
This is fascinating stuff, and it is of the same genre as Tesla's open-sourcing of its battery patents.
The commercial backdrop to both of these announcements is exactly the same.  How do you persuade industry to embrace new technology?  Both Toyota and Tesla know that there is no future for hydrogen or battery powered cars unless there are (a) lots of cars and (b) plenty of refilling stations.
This is familiar territory and heralds the development of new standards.  These standards will either be free (such as Bluetooth and W3C) or royalty-bearing (the FRAND licensing associated with ETSI mobile standards).  In this context, Toyota may not have had much choice.  If battery technology is to be available royalty-free, could hydrogen fuel cell technology really be royalty-bearing?
Just as interesting as the announcement itself is that Toyota made it in its flagship CES 2015 presentation. Looking through other CES announcements highlights the technology theme.  There are almost as many announcements from Google, Nvidia and Qualcomm as from Audi and VW (focusing on self-drive and gesture-controlled vehicles).
'Phones on wheels' ... 
For the automotive sector, the way forward is clear.  Cars are now a pure technology play.  As one analyst reported, cars are transforming into "phones on wheels". Many other sectors are following suit. Samsung’s focus at CES was on the internet of things, with sweeping statements that, in the next two years, 90% of their devices will be part of their “internet of things strategy”. The future looks similar for banks, where online and mobile banking are changing everything.
If technology is really taking over all aspects of our lives, it is no surprise that the intellectual property issues and tensions that have dominated the horizon for mobile phones and tablets (the so-called patent wars) will soon spread into other sectors that have to this point not had to concern themselves with patents. 
Times are certainly changing and any company that does not have an IP strategy as part of its business strategy is going to be significantly disadvantaged. 
No reader of this blog is likely to argue with the proposition that a business shorn of IP policy will be disadvantaged. However, there is plenty of room for discussion as to whether Toyota's decision is the correct one and, if it is, whether its impact has been diminished by the sudden drop in oil prices.  Comments?

Footnote: there's a handy and accessible Aistemos Cipher snapshot of the fuel cell patent landscape that you can peruse here.

Friday, 11 January 2013

Incentives to Collaborate: WIPO Article and the US DOJ/USPTO Guidance Letter

In the December 2012 WIPO Magazine there is an excellent brief article concerning patent pools and standards titled, “Collaboration in Intellectual Property: An Overview,” by distinguished Harvard Business School Professor Josh Lerner and doctoral student Eric Lin.  The article describes the increase in patent pools in the last 15 to 20 years after a period of regulatory distrust of such collaborations since the 1940s.  The article notes that many questions remain for research relating to collaborations and makes suggestions for future research, but also states that some lessons can be learned from the existing literature, such as “requiring patent pools to engage in independent licensing.”  The article also argues that regulatory agencies should “actively encourage socially beneficial collaborations” instead of focusing on the potential anticompetitive consequences of such collaborations.  Specifically, the authors note that France, Germany and the United Kingdom provide benefits to participants in certain collaborations.  Moreover, the authors caution that US regulators may be too zealous in prohibiting discussions of price by standard setting organizations and this may waste time.  The authors suggest a “temporary safe-harbor status to firms that wish to explore the feasibility of collaborating.” 

The United States appears to be taking some steps towards ensuring that the International Trade Commission does not act in a way that creates a disincentive to participate in or create collaborations.  In a January 8 Joint Statement by the United States Department of Justice, Antitrust Division (DOJ) and the United States Patent and Trademark Office, Office of the General Counsel (USPTO), the DOJ and USPTO provide guidance to the International Trade Commission concerning whether exclusion orders should issue in all cases where standards-essential patents offered on F/RAND terms are infringed.  The DOJ and USPTO clearly explain the benefits of patents as well as the benefits of voluntary licensing such as F/RAND licensing, and ultimately caution that exclusion orders in particular cases could create disincentives to participate in F/RAND licensing.  Intellectual Property Watch provides a description of the report here and a copy of the report is available here.   A good first step? 

Wednesday, 16 May 2012

The Folly of Picking Winners in ICT

The IP Finance weblog welcomes the latest guest post from Keith Mallinson (WiseHarbor) on a subject which he has really made his own: the subtle interplay of sometimes competing and sometimes congruent private and public interest in the shaping of the dynamics of the market for the licensing of technology in the information, communication and telecom sector.  This post touches on a topic that has been the subject of all-too-little discussion in IP circles.  IP rights help to establish the existence of winners in the marketplace -- but who gets to decide who those winners should be?
The Folly of Picking Winners in ICT

Government attempts to favour and promote certain business models, companies and technologies are justifiably criticised. The UK Cabinet Office’s proposed policy to mandate the use of only pre-selected, royalty-free standards in public ICT procurement is similarly flawed. This will limit choice by foreclosing many popular open standards, numerous products which adhere to them and companies who depend on upstream licensing revenues. The Open Standards Board responsible for implementing this policy will face significant governance challenges in ensuring impartiality in standards selections. In contrast, free-market processes allowing competition among a much wider array of open standards and software licensing maximise customer choice across many different government departments, foster innovation, reduce lifecycle costs and enable obsolete or poorly performing standards to be superseded.
Mandating particular standards and discriminating against or excluding royalty-based business models in government procurement constitutes hazardous industrial policy for the UK. The government is the largest UK ICT spender, with annual expenditures of approximately £18 billion in recent years. The direct and likely indirect consequences of this policy would be significant, given such a large purchaser's influence on the ICT marketplace: for example, citizens, as well as government suppliers of other goods and services, would be explicitly or implicitly obliged to adopt the same standards.

Dirigisme versus facilitation
Governments have a history of making bad decisions in championing particular companies, technologies and business models. For example, the Inmos semiconductor company received £211 million from the UK government in the 1970s and 1980s to pursue its strategy of producing commodity DRAMs and developing its “transputer”, but the company foundered, did not become profitable after many years and was sold to SGS-Thomson in 1989. The UK is effectively nonexistent in semiconductor manufacturing today. The UK’s “fabless” semiconductor companies such as ARM, Picochip (acquired by Mindspeed Technologies in 2012) and Icera (acquired by NVIDIA in 2011) rely on partners including foreign “foundries” to fabricate their designs. 
State monopoly France Telecom forced adoption of the Minitel videotext online service in the 1980s by withdrawing phone books and spending billions giving away the terminals to citizens. The associated technological standards and equipment manufacturers made minimal headway with Minitel technologies abroad and were eclipsed by the advance of the Internet in the 1990s. Minitel provided consumers with their first means of online access. However, views on long-term benefits to French consumers are mixed. Resistance to replacing the entrenched home-grown standard caused France to be a laggard in Internet adoption.

In contrast, supporting entire industry sectors where a nation has strategic strength is more justifiable and attracts widespread support from various commentators. For example, clustering of complementary and competitive companies can be beneficial. In these circumstances, market forces spur competitive behaviour, including some Schumpeterian “creative destruction”, which helps eliminate the sclerosis and risks that come with monoculture. For example, Silicon Valley in California provides a fertile technical and commercial environment in which various business models and many ICT companies, standards and products have flourished while others have failed.

Better for less

A key stated objective of the proposed Cabinet Office policy is to level the “playing field” for open source and proprietary software. It is, therefore, perverse that standards based on Fair, Reasonable and Non-Discriminatory (FRAND) licensing and requiring patent fees should be the principal target for elimination under this policy. The policy will automatically also exclude many proprietary offerings that are based on those standards and which cannot practically be adapted to other, royalty-free, standards. In many cases, such standards are widely implemented by many suppliers and are used by the vast majority of business customers and consumers.

The Cabinet Office seeks to mandate specific royalty-free standards to achieve various objectives including cost reduction and avoiding vendor lock-in, as well as making ICT solutions fully interoperable. However, a report entitled Better for Less, published in 2010 by Liam Maxwell, now Deputy Government CIO and the proposed policy’s champion, identifies that most UK government ICT spending is with systems integration companies including HP/EDS, Fujitsu Services, Capgemini and IBM. The Government's over-reliance on large contractors for its IT needs, combined with a lack of in-house skills, is also a "recipe for rip-offs" according to a report by the Public Administration Select Committee (PASC) in July 2011. These suppliers are typically deeply embedded with long-term contracts that government finds difficult to unravel.

Software represents only a relatively small playing field in comparison to others in ICT spending. According to Forrester Research figures, market segments where open source software competes or combines with proprietary software products represent just 12.4% of $2.5 trillion total global business and government ICT expenditures including operating system software (1.0%), non-custom-built applications (6.7%) and middleware (4.7%). In comparison, IT services (11.6%) and outsourcing (9.8%) combined represent 21.5% of spending. Computer equipment represents 13.9%. The $2.5 trillion total appears to exclude very significant costs for internal staffing.
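To put those percentages into perspective in dollar terms, here is a minimal sketch; the shares are the Forrester figures quoted above, while the dollar conversions are my own back-of-envelope arithmetic:

```python
# Converting the quoted Forrester market shares of the $2.5 trillion global
# ICT spend into approximate dollar figures (the conversions, not the shares,
# are my own arithmetic).
TOTAL_ICT_SPEND = 2.5e12  # dollars

segments = {
    "operating system software": 0.010,
    "non-custom-built applications": 0.067,
    "middleware": 0.047,
    "IT services": 0.116,
    "outsourcing": 0.098,
    "computer equipment": 0.139,
}

for name, share in segments.items():
    print(f"{name}: ${share * TOTAL_ICT_SPEND / 1e9:,.0f} billion")

# The three software segments where open source competes or combines
# with proprietary products:
open_source_arena = sum(segments[k] for k in
                        ("operating system software",
                         "non-custom-built applications",
                         "middleware"))
print(f"segments where open source competes: {open_source_arena:.1%}")  # 12.4%
```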

Software licensing costs are included even in modestly-priced PCs. The PASC report also indicated it was “ridiculous that some departments spend an average of £3,500 on a desktop PC”. A 2011 Cabinet Office press release stated it would “end poor value contracts such as those where Government departments and agencies paid between £350 and £2,000 for the same laptop”. The response to a freedom of information request on this matter, reported by fullfact.org, shows that these prices actually represent totally different PC specifications: the proprietary operating system and office document software is identical in each case, while differences in microprocessors, displays, wireless modems and functionality such as fingerprint recognition account for the very large pricing disparity.

Uncertain scope, invalid distinctions

The proposed policy states that standards selection will be limited to software interoperability, data and document formats. The scope of these terms is unclear, and in the next few years it will become even more difficult to separate standardisation in these domains meaningfully from others. The consultation’s terms of reference make the invalid assumption that software is distinct from hardware and that telecommunication is distinct from computing. Evidence weighs against these assumptions, with increasing technological convergence and other changes in ICT. Smartphones and tablets are becoming the dominant computing platforms in our personal lives and at work. Similarly, PCs have overtaken mainframe computers and revolutionised ICT usage since the 1980s. Communications is intrinsic to these new mobile devices and is increasingly integrated with most desktop PCs, including web- and cloud-based usage where demarcations between software, hardware and service are submerged.

Video is becoming ever more prevalent. According to long-standing Cisco CEO John Chambers, in a recent Bloomberg Businessweek article, “Every device, five years from now, will be video. That’s how you’ll communicate with your kids, with work.” Switching video standard is nothing like the peripheral task of simply replacing or adapting the mains plug on a TV set. Interoperability standards for video compression and encoding are highly complex algorithms that are deeply and extensively embedded in the workings of core hardware and software. Around one third of Internet traffic is streaming video, and mobile video traffic already exceeds 50%. Virtually all of that conforms to FRAND-based standards requiring patent licensing, including AVC/H.264 (MPEG-4 Part 10), which has the most widespread adoption.

The customer is always right

Standards requirements change with technological innovations and shifting user needs. It is very difficult for any centralized government administration to anticipate or react to the dynamics of ICT supply and demand. Competition among standards is highly beneficial. Market forces precipitate occasional revolutionary changes, with new standards displacing old ones (e.g., HTML substituting for videotext standards such as that used by Minitel), and continuous, incremental improvements to existing standards (e.g., HTML5 replacing previous versions of HTML). Changes in user preference and demand can be difficult to predict. For example, within a few years of the introduction of Apple’s iOS-based iPhone in 2007 and Google’s Android in 2008, former smartphone market leaders Nokia and RIM, each with its own operating system software, were completely up-ended. The highly innovative capabilities of the new software platforms and devices have succeeded because they are very different from, and much better than, what they replaced.

Different government departments have diverse needs. Whereas interoperability among UK government departments is important, so is optimising interoperability and access by end users, commercial partners and international organisations. Defence requirements can preclude the most widespread propagation of interoperability and encryption standards. Maximising functionality, security and interoperability for patient records among health authorities will be compromised by imposing standards that are chosen to accommodate requirements in education.

From a user’s perspective, functionality and interoperability with other users trump supply-side considerations, including the number of prospective ICT suppliers and lowest price.

Upstream savings, downstream costs

Although open source software and royalty-free standards eliminate licensing fees, they do not ensure lower overall costs. On the contrary, there is significant evidence that open source is no cheaper than proprietary solutions once total ICT lifecycle costs, including project implementation and support, are taken into account. In many cases, total costs may also be lower with the technical efficiencies and large economies of scale that arise from the implementation of popular royalty-charging standards. It is practically impossible to create some high-performance ICT standards without infringing any patents for which royalties might be demanded.

Patent fees on popular FRAND-based standards are typically modest. Patent pool administrator MPEG LA licenses 2,339 patents it deems essential to H.264 from 29 licensors to 1,112 licensees for a maximum per unit rate of $0.20. This covers the vast majority of patents declared as essential to the standard. With around 6 billion mobile phones in service worldwide, aggregate royalties are low enough for GSM phones to be sold at price points down to less than $20. However, these fees significantly enable technology companies with upstream business models. They also allow vertically-integrated players to recoup some of their development costs from companies with downstream business models who make products but do not invest in developing the standards-based technologies. Eliminating the possibility of royalties merely forecloses upstream business models in favour of the downstream businesses, such as those that dominate government ICT spending, including hardware manufacturing, systems integration, technical support and outsourcing.
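A back-of-envelope calculation, using only the figures quoted above, shows how small the worst-case burden of such a pool rate is even at the very bottom of the handset market. This is a sketch: juxtaposing the H.264 pool rate with the $20 GSM price point is my own illustration, not a claim about actual GSM royalties:

```python
# Worst-case share of a cheap handset's price taken by a $0.20 per-unit pool rate.
max_pool_rate = 0.20    # MPEG LA's maximum per-unit rate for H.264, in dollars
cheapest_phone = 20.00  # lowest GSM phone price point cited above, in dollars

print(f"{max_pool_rate / cheapest_phone:.1%}")  # 1.0% of the selling price, at most
```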

Open and competitive ICT markets allow the widest range of business models and licensing practices, including royalty-free standards and open source software. There are many examples of open source software running on FRAND-based standards requiring royalty fees. For example, there are various proprietary and open source software codec implementations available for the H.264 video standard. It would be nonsense to bar this standard in favour of another that has only tiny adoption (the most fundamental barrier to interoperability among users) and inferior or unproven performance, including in technical compliance and interoperability among implementations. And, in the case of video, for example, the replacement would most likely infringe some of the very same patents used by the successful standard it displaced; so there is a significant possibility that patent fees would be required despite wanting to wish them away. Developing a high-quality video codec standard is a formidable task drawing upon lots of intellectual property. Designing around the best technologies to avoid royalty-bearing technologies will result in inferior standards and implementations.

There is generally no conflict between open source licensing and paying patent royalties to third parties. In the cases where there is conflict, this is a problem of the licensors’ own making. The most stringent open source licenses, such as GNU GPLv3—under which “patents cannot be used to render the program non-free”—are seldom used because of such conflicts. In cases where licensing prohibits patent fees, the only legal solution is for such software to be written to ensure it does not infringe any IP that has not also been specifically declared royalty free by its owner.

Governance with selector selection

The Open Standards Board responsible for implementing the policy will face significant governance challenges in ensuring impartiality in its members and the standards selection processes they oversee. It will be difficult to recruit board members who have the required competence in ICT standards, and who as individuals, employees, or academics, are completely free of any interests in the outcome of any standards selections. Members will be affected by their other interests in specific companies, standards groups and business models.

International harmonisation and liberalisation

The European Commission’s approved guidelines on the applicability of Article 101 of the Treaty on the Functioning of the European Union (TFEU) for horizontal co-operation agreements recognise the importance and value of standardization agreements.
“Standards which establish technical interoperability and compatibility often encourage competition on the merits between technologies from different companies and help prevent lock-in to one particular supplier.”
These guidelines lay out a comprehensive approach for conformity of standardisation agreements with Article 101 TFEU, creating a “safe harbour” while affording standard-setting organisations significant autonomy in setting policies for disclosure of IP and its licensing terms. FRAND licensing, with and without payment of royalties, is explicitly recognised. Licensing policies of many international ICT standards-setting organisations including IEEE, ETSI, ITU-T, CEN/CENELEC are consistent with these guidelines and the charging of patent fees on their standards. It would be a travesty to exclude their standards from government usage in the UK, even if this was only on the basis of attempting to do so for what the Cabinet Office delineates as software interoperability, data and document formats.

Friday, 3 February 2012

ICT esperanto and competition among standards

It has been a little while since the IP Finance weblog has hosted a piece from regular guest contributor and ICT patents and standards expert Keith Mallinson (WiseHarbor) -- but we are pleased to welcome him back for his first post for 2012:
ICT Esperanto and Competition among Standards

Open and competitive ICT markets produce many standards, but not all will flourish or even survive. With free choice, customers and their users often overwhelmingly plump for one standard over others in pursuit of highest performance (e.g., from HSPA in 3G cellular) or widest interoperability (e.g., with SMS for mobile messaging).

Significantly different levels of compliance and interoperability among vendors, all notionally supplying to the same standard, may also cause customers to favour one vendor over all others. If product performance and interoperability are inferior to those of established standards from leading vendors, the introduction of new and open standards or alternative vendors will fail with customers. Increasing choice of standards or suppliers is useless if implementations do not work properly. In many cases that requires full interoperability with large bases of existing users.

However, coexistence of competing standards, including proprietary extensions to these, is also vital to facilitate innovation, satisfy diverse requirements and enable the emergence of new leading standards. Strong user preferences for certain standards justifiably reward those who have had the foresight, taken the risks and made the investments to develop and promote them, and to build a user base for their fully-featured and standards-compliant products.

Evolution beats creation with rich and widespread use

The need for high levels of performance and interoperability in ICT can be illustrated with an analogy in human language. According to Wikipedia, the goal with Esperanto’s creation was “an easy-to-learn and politically neutral language that transcends nationality and would foster peace and international understanding between people with different regional and/or national languages”.  However, use of Esperanto has never become more than a niche activity. It is spoken by less than 0.1% of the world’s population versus more than 10% with English as a first or second language. Evolved languages have richer vocabularies, much more extensive literature and large numbers of existing mother-tongue and second-language users. English, Spanish, French, German and other languages have remained preeminent across regions, nations and within particular international domains in business, art, ICT and engineering.  Customer preference and supplier leadership for books and courses from indigenous organizations, for example Collins for English as a foreign language and the Goethe Institute for German, are a natural consequence of the development of market supply for language education.

Improving interoperability among different suppliers’ kit to eliminate so-called vendor lock-in is a common requirement of centralized procurement authorities, but it is not the only, let alone the most important, need in selecting or mandating standards. Standards get displaced when alternatives provide distinctly better functionality or end-user interoperability. For example, the introduction of the GSM standard in mobile communications from 1992 was a major technological step forward from the various different analogue technologies deployed in European nations, and no other digital standard was allowed to contend. GSM increased network efficiency in the use of scarce spectrum allocations, introduced a succession of new capabilities including text messaging and data communications, and enabled much wider geographic interoperability for users with international roaming. It also created a fresh start for European manufacturers who had been impeded by disparate national analogue standards with correspondingly fragmented handset and network equipment markets. The openness of the GSM standard also provided greater choice and created an expectation of less customer dependency on particular vendors. The needs of consumers, operators and equipment vendors were so much better satisfied with a new standard that it was quite possible to ignore the complications of backward compatibility and start afresh in European cellular with GSM.

Backward compatibility and dual-mode working

In order to preserve interoperability while adding new functionality and improved efficiency in ICT, it was sometimes necessary to create standards with backward compatibility to older standards or provide dual-mode working for many years. In the UK, two of only three TV channels were broadcast in monochrome with 405-line VHF transmissions until 1970, when colour was introduced with the addition of 625-line UHF transmissions.  Although the colour transmissions were backward compatible with 625-line UHF monochrome receivers, a simulcast was required to serve old, single-standard TV sets until the 405-line VHF network was eventually closed down between 1982 and 1985. In the US, cellular systems including incompatible and competing digital CDMA and TDMA technologies—introduced from the mid-1990s—were designed to incorporate existing analogue capabilities to ensure national roaming. The requirement for cellular licensees to maintain capabilities for analogue roaming throughout their networks was not relinquished until 2008.  Whereas 3G and 4G technologies (including WCDMA, HSPA, LTE and LTE Advanced) surpass the performance of 2G technologies (including GSM, GPRS and EDGE) with respect to spectral efficiency, network speeds and latency, 2G technologies will most likely remain embedded in our phones for at least another decade so we can continue to roam widely worldwide. Similarly, HSPA will be included in multimode devices to maximise access to mobile broadband coverage where LTE is not present or uses incompatible frequencies. Most Blu-ray players sold for decades to come will retain the ability to play our old libraries of regular DVDs.

In fact, with the low incremental costs of retaining old standards, due to software-defined capabilities running on cheap processing and memory, manufacturers find it increasingly attractive to retain old standards and incorporate multiple standards in their products. For example, as I draft this article, Microsoft Word offers me the option to “Save As” from among 18 different formats including .odt (ODF OpenDocument Text), .html, .pdf and .docx, as well as .doc. Apple’s iPhone 4S incorporates three distinctly different cellular standards: GSM, CDMA EV-DO and HSPA (plus several other closely-related and compatible standards in each case) across five frequency bands, plus WiFi and Bluetooth. Customer and end-user choice is maximized without excessive incremental costs. Crowds can now choose for themselves which standards they prefer to actually use.

Forbidden fruit is most succulent

Prohibiting use of popular standards and products in favour of “open” alternatives can significantly harm end users because, unsurprisingly, the former generally work best. Functionality, quality and interoperability among users must take precedence over ability to switch or mix suppliers. Document formatting standards provide a good example. It is a domain where standards selection has become a most prominent issue. While it is widely and correctly recognized there are problems with interoperability across different formats, e.g., going from ODF to OOXML, it is commonly and incorrectly assumed that all different vendor implementations of a particular document format will fully interoperate and faithfully reproduce identical documents after editing and saving.

Research by Rajiv Shah and Jay P. Kesan shows that what are supposedly the most open document standards do not in themselves ensure the highest, or even satisfactory, levels of interoperability in many cases when documents are transferred, edited and saved among different word-processing programs. On the contrary, compatibility among different vendors’ implementations of the same open document formats can be quite poor.  In contrast, the leading proprietary standard has the greatest functionality, and this is best preserved when documents are exchanged, edited and saved only among different users with the same word-processing program or different programs from the same vendor.  This research included the three most popular word processor document formats: ODF, generally regarded as the most open format; OOXML; and .doc, seen as the most proprietary or closed.  Given that open standards do not ensure interoperability among different vendors, there is no guarantee of the vendor choice and resulting price competition that authorities such as governments expect from procurement policies that insist on what are commonly regarded as the most open standards.

Interoperability case study

Will anything less than 100% standards compliance and interoperability ever be good enough?  Whereas that goal is unachievable – particularly given that most standards must regularly be updated with various changes and interoperate with other standards serving different purposes, such as presentations and spreadsheets as well as word-processing – it can be highly desirable to get as close as possible to that ideal. Personal experience illustrates how demanding conditions can be, with the risk of embarrassment or worse from seemingly slight incompatibilities or data loss. From time to time I am retained as an expert witness in litigation on temporary case teams with contributions from up to dozens of different firms (e.g., with many case co-defendants) including lawyers, economists and industry clients. Drafting and editing expert reports and other documents involves the “master” being passed around, with changes to text, graphics, footnotes and redline “track changes” implemented by many different people before being finalised and hurriedly submitted in advance of a fixed deadline. According to Shah and Kesan, as referenced above, these are precisely the types of document features that tend not to be preserved when documents are modified and re-saved in different vendors’ applications – let alone when transferring from one standard format to another. Several years ago, I was satisfied with my checks to a certain finalised word-processing document, but was subsequently horrified to discover that in a chart – created in a presentation program and faithfully reproduced in the word-processing document – the background shading had moved in front of a graph line when the document was converted into .pdf format immediately prior to submission.  This obscured the key turning point that was the entire purpose of my chart. At the other extreme, many users may seek only basic functionality. They might, quite reasonably, prefer to trade off functionality and interoperability in order to pay the lowest price possible or obtain the document program for free.

Winner may take all – but not for ever

Standards can arise from a variety of origins and for various reasons. Communications standards including fax for document images, SMS for mobile messaging, SMTP for email, and TCP/IP and HTML on the Internet took hold rapidly and most extensively in their respective domains because the world lacked, while users desperately needed, standards with widespread adoption for interoperability. These characteristics were lacking, for example, in the closed environments with proprietary email systems used internally by corporations. The .doc and other office suite standards remain entrenched because they already provide the highest levels of functionality and interoperability with 95% of users. Usage also includes significant legacies with user-customized templates, including un-standardised macro programming, for particular business purposes such as order entry and monthly financial reporting.

The major fall in fax usage since the advent of interoperable and widely-adopted Internet-based email a decade ago (though most of us retain fax capability and still list fax numbers on our business cards), and a decline in SMS in recent years, show that even the most popular and seemingly enduring standards can eventually take a tumble with new technologies and alternative standards.

Mobile communications has flourished due to, not despite, extensive competition among standards. Multivendor mobile technology supply has not significantly constrained functionality and interoperability because new mobile standards were developed from inception to achieve these. The U.S. has thrived and now leads the world in network deployment of HSPA and LTE technologies with the most rapid adoption of the most advanced smartphones. There has been competition among four different 2G technologies, several 3G technologies including CDMA (including EV-DO) and WCDMA (including HSPA), and most recently between WiMAX and LTE technologies. The latter two are standardized by rivals IEEE and the 3rd Generation Partnership Project (3GPP) respectively; CDMA is standardized by another group called 3GPP2. This in turn has spurred operator competition in the U.S. and also accelerated technology developments worldwide.  The 3G successor to GSM’s radio layer, which is called UMTS or WCDMA, has far more in common with CDMA than it does with GSM. Pioneering work in CDMA was of great benefit to WCDMA. A call to arms for cellular operators to back LTE against WiMAX was issued by Vodafone’s then CEO, Arun Sarin, at the GSM Association’s 2007 Mobile World Congress.  Later that year, Vodafone and its CDMA technology-based partner Verizon Wireless announced they would both pursue LTE as their common next generation technology.  A keynote presentation by Verizon Wireless CTO Dick Lynch at the 2009 Barcelona show announced the LTE vendor line-up and most ambitious launch dates. The acceleration and strength of commitment to LTE, precipitated by the WiMAX challenge, has ensured the latter will be kept to a minor position versus LTE.

Current consolidation of cellular standards development in 3GPP has not eliminated competition and it does not preclude significant challenges from standards groups such as IEEE in the future.  3GPP cellular standards have become increasingly strong versus 3GPP2 and IEEE wireless and mobile standards, but there is internal competition within 3GPP and rival standards bodies will continue to present competitive challenges with innovations in rapidly growing and changing markets. Notwithstanding the rise of LTE, based on OFDMA protocols, CDMA-based technologies such as HSPA are also continuously being improved to closely rival the capabilities of LTE.  Some 3GPP contributors have distinct technical and commercial preferences for one standard over the other.

IP financing and just rewards

When selecting standards, customers and end-users in particular want the highest performance, most exhaustive compliance and widest user interoperability. It is no surprise customers and end-users tend to make the same selections. Apple’s popular iPhone, with its App Store, iOS and deep integration of software with silicon, is a very closed and proprietary system; but this provides superlative performance end-to-end. Microsoft is a leading beneficiary in word-processing applications, with its leading .doc standard and with its contribution to OOXML, because its implementations provide the richest functionality and most compliant interoperability among the widest base of users. Whereas there is no consensus on what qualifies, and what does not qualify, as an open standard, the likes of GSM, HSPA, EV-DO, LTE and WiFi are as open as anything on offer in their respective domains. Major contributors, for example Ericsson and Intel respectively, have been principal beneficiaries. Their gains are in upstream licensing income, downstream product markets or both. Whichever way, the returns are for taking significant risks and making investments in developing standards, products and markets full of standards-compliant users.

Thursday, 21 July 2011

Collaborative standards for mobile technologies: a great deal for consumers

The IP Finance weblog is delighted to host another guest piece by Keith Mallinson (WiseHarbor) on the issues raised by the inclusion of patented IP within industry standards. Do please let us have your comments: Keith is most willing to deal with them.
"A Great Deal for Consumers in IP
As indicated in my previous IP Finance postings here, here, here and here, mobile technologies, devices, networks and operator services are highly standards-based with essential-IP licensing predominantly and successfully based on a system of (Fair), Reasonable and Non-Discriminatory terms. My articles show extensive competition with significant new market entry, an effective and vibrant innovation ecosystem including Standards Setting Organisations such as 3GPP in mobile communications (including partners ARIB in Japan, ATIS in the US and ETSI in Europe), modest aggregate royalty charges for essential IP compared to product and service expenditures, and declining consumer prices.


Holding out against hold-up theories


IP Finance readers encouraged me to submit my first three IP Finance postings to the US Federal Trade Commission in response to its request for information and comments on “the practical and legal issues arising from incorporation of patented technologies in collaborative standards". In particular, the market facts-based analysis submitted in my compendium of articles counters the FTC’s allegations of patent “hold-up” in its March 2011 report entitled The Evolving Marketplace: Aligning Patent Notice and Remedies with Competition. In this, it asserts that
... the patentee can use the threat of an injunction to obtain royalties covering not only the market value of the patented invention, but also a portion of the costs that the infringer would incur if it were enjoined and had to switch. This higher royalty based on switching costs is called the “hold-up” value of the patent. Patent hold-up can overcompensate patentees, raise prices to consumers who lose the benefits of competition among technologies, and deter innovation by manufacturers facing the risk of hold-up.
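The mechanism the FTC describes can be made concrete with a stylised sketch; every number below is hypothetical, chosen only to illustrate how an injunction threat could push a royalty above the invention's market value:

```python
# Stylised "hold-up" arithmetic per the FTC's description quoted above.
# All values are hypothetical, per unit.
invention_value = 0.50   # market value of the patented invention
switching_cost = 2.00    # cost the infringer would bear if enjoined and forced to switch
captured_share = 0.5     # assumed share of switching costs the patentee can extract

royalty_on_merits = invention_value
royalty_with_holdup = invention_value + captured_share * switching_cost

print(royalty_on_merits)    # 0.5
print(royalty_with_holdup)  # 1.5 -- the extra 1.0 is the alleged "hold-up" value
```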
A Director of Standards at one major company wrote to me after reading my first two articles stating I had “done a great job in these two posts dispelling some of the unsubstantiated myths around the use of patents in the standards context”. He went on to write that “the FTC RFI actually asks questions that are clearly and concisely answered by your two blogs (and I suspect your third blog on upstream royalties and downstream benefits will address a couple more)”. He expressed his concern that, whereas many academics believed “hold up was a real problem”, “those from industry maintained that hold up was a theoretical problem created by academics”.  
Resurrecting ex-ante licensing auctions 
As part of the FTC’s consultation, it streamed a public workshop held on 21st June 2011. Divergent views were expressed in vigorous, balanced and exhaustive debate in three panel sessions by representatives from a wide variety of corporate interests on key matters related to the alleged hold-up, including IP disclosure, RAND licensing terms and the use of injunctions. Joseph Farrell, the FTC’s Bureau of Economics Director, wrapped up the all-day event with a closing presentation that provided no opportunity for further discussion. He presented the FTC as the sole representative for consumers in the debate because consumers are notably absent from the table in SSOs, in licensing discussions and at this workshop. He asserted that suppliers are somewhat indifferent to the alleged hold-up because its costs are simply passed on in elevated consumer prices. 
Significantly, he offered no evidence on the extent to which any cost savings in IP fees would actually be passed on to consumers, and provided no indication of consumer harm versus the benefits that accrue to consumers from IP-owners generating a reasonable risk-adjusted return that can be reinvested in further innovation. Instead, he proposed resurrection of the much-criticized Swanson and Baumol ex-ante auctioning approach, in which technology owners would offer their essential IP for inclusion in a standard in a “sealed bid” process designed to ensure (the bizarre and unreasonable objective, in my opinion) that the IP price is no more than the incremental value over the price of the next best alternative (even if the latter is priced at zero by a vertically-integrated player seeking to minimise its downstream in-licensing costs). 
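As I read it, the Swanson and Baumol rule would cap the winner's royalty at the runner-up's price plus the winner's incremental value over that alternative. The sketch below, with entirely hypothetical numbers, shows how a zero bid from a vertically-integrated rival drags the permitted price down:

```python
# Stylised ex-ante auction pricing rule (my reading of Swanson-Baumol;
# all royalty bids and technical values are hypothetical, per unit).
winner_value = 3.00      # assessed value of the winning technology
runner_up_value = 2.50   # assessed value of the next best alternative
runner_up_bid = 0.00     # e.g. a vertically-integrated player bidding zero

# Cap: the next-best price plus the winner's incremental value over that alternative.
price_cap = runner_up_bid + (winner_value - runner_up_value)
print(price_cap)  # 0.5 -- regardless of what the winning technology cost to develop
```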
In addition to numerous problems with that particular method of fixing prices, the evidence is that consumers are actually doing rather well with the efficient status quo in licensing IP. With standards of great complexity and involving hundreds or thousands of patents in mobile communications each covering different portions of each standard, it would be very cumbersome to administer IP auctions and there would be all manner of undesirable consequences. Whereas standards-based technologies are selected in a collective process on the basis of technical merit by a wide assortment of companies who generally negotiate licensing terms on a separate bilateral basis, auctions would constitute collusion among purchasers and would likely unduly emphasise IP price over other important factors (such as functionality, features, performance, and even total system cost and price to consumers). This would be anticompetitive for the same reasons that have prohibited other forms of collective price setting in various SSOs. Substituting the proposed auctions for outlawed collective negotiations neither eliminates nor diminishes the spectre of “monopsony”. Technology selection is a complex process that would be impaired with the rigidities of an auction. IP is most commonly priced on a portfolio basis with essential IP and other patents licensed in a bundle covering the complete implementation of the standard. 
Licensees simply do not want to license only the patents covering a small portion of the standard if the licensor owns other patents that cover other parts of the standard; they need and want the entire bundle of essential IP.  The IP price is just one among many factors included in licensing negotiations. Setting standards is not a one-off event; it is an evolutionary process including a succession of numerous incremental additions within standards, such as GSM with GPRS and EDGE, and WCDMA with UMTS rev 99, UMTS rev 4, UMTS rev 5 etc. to include technologies such as HSPA, and most recently within LTE. Various parties prioritise these factors differently in different bilateral negotiations, which enables the most efficient outcomes for all in licensing agreements. In return for cross-licenses, vertically-integrated manufacturers are incentivised to under-price the inclusion of their IP in standards, versus upstream licensors, because this minimises their costs of having to license-in from others. Even Swanson and Baumol have expressed concerns that the opportunistic exploitation of ex post market power “will be magnified if the IP owner is also a participant in the downstream market”.


My previous IP Finance posting also illustrates the battle of business models between upstream licensors and vertically-integrated manufacturers. My analysis measures the financial incentives the latter have to minimise overall IP fees at the expense of the former. Competition between business models is a positive phenomenon that should be encouraged. Regulation to the benefit of one business model over another with royalty rate caps, for example, would stifle competition and innovation.


Minimising prices is not the be-all and end-all, for corporates or consumers, with other factors (such as features, performance, functionality, flexibility to upgrade services, and support) also very important.  One interesting observation among panellists at the workshop was that, in some cases, would-be licensees would rather sign a royalty-bearing license than commit to other onerous conditions demanded in royalty-free licensing. Whereas consumers typically avoid paying more than single-digit percentages over the odds for commodities such as petrol and electricity supply, they frequently choose to pay a significant premium for the most innovative products and brands. For example, Apple’s iPhone has commanded a particularly high wholesale price of around $600 (around double the average selling prices of smartphone companies RIM and HTC) and a gross profit margin approaching 60% on the strength of those factors. Typically, the price is heavily subsidised by mobile operators, but consumers pay over the life of their service contracts. Apple’s profits fuel its spectacular innovation machine that has led to entirely new product categories with its iPods, iPhones and iPads in music devices, smartphones and tablets respectively, and that has created the supporting ecosystem with iTunes, its App Store and thousands of developers. Apple’s high margins since the introduction of the iPhone in 2007 have attracted plenty of competition, as illustrated in the following section, with “me too” and differentiated products at lower prices for those who are price sensitive. This also exerts downward price pressure on Apple. 
Sister Act 
The FTC’s sister agency, the Federal Communications Commission, provides plentiful evidence that consumers are served very well with diverse choice in suppliers, handset models and with innovative new offerings in smartphones. 
The FCC’s fourteenth Annual Commercial Mobile Radio Service (CMRS) Competition Report, published one year ago, ‘examined, for the first time, competition across the entire mobile wireless ecosystem, including an analysis of the “upstream” and “downstream” market segments, such as spectrum, infrastructure, devices, and applications’. The fifteenth report, recently published, “follows the same analytical framework”. In this, it shows how consumer choice in handset devices has increased significantly in recent years. According to the FCC’s latest report: 
 From 2006 to 2010, the number of mobile wireless handset manufacturers that distribute in the U.S. market increased from eight to 21 [see Exhibit 1]. As of June 2010, these 21 handset manufacturers offered a total of 302 handset models to mobile wireless service providers in the United States. Eleven of these handset manufacturers offered at least ten handset models each.
Exhibit 1: Handset Manufacturers and Handset Models Offered, U.S., 2006-2009 (chart not reproduced). Source: FCC, 2011
On the important matter of innovation, the FCC goes on to state: 
Over the past three years handset manufacturers have introduced a growing number of smartphones with the following features: an HTML browser that allows easy access to the Internet, an operating system that provides a standardized interface and platform for application developers, and a larger screen size than a traditional handset. In contrast to traditional handsets with applications that include voice and messaging, smartphones have more user-friendly interfaces that facilitate access to the Internet and software applications. Ten handset manufacturers offered a total of 144 smartphones in June 2010, compared to 56 in June 2009. [Exhibit 2] lists the top five smartphone and handset manufacturers, by number of models offered, that distributed in the United States in June 2010.
Exhibit 2: Smartphone Manufacturers Offering Largest Number of Smartphone Models (U.S., June 2010) (chart not reproduced). Source: FCC, 2011
The total of 230.7 million handsets sold in the year to Q2 2010 is quite remarkable, given a US population of 309 million. Exhibit 3 shows quarterly U.S. handset shipments by manufacturer. With subscriber penetration exceeding 100%, the vast majority of Americans already have a phone. The proven consumer desire to keep trading up, so frequently and extensively, with new and additional devices flies in the face of arguments that IP prices make consumer prices excessive and poor value for money relative to the costs of technology development.
Exhibit 3: U.S. Handset Shipments, Q2 2009 – Q2 2010 (chart not reproduced). Source: FCC, 2011
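The replacement-rate arithmetic behind that observation is simple; the handset and population figures are those quoted above:

```python
# Handsets sold per US resident in the year to Q2 2010, from the figures above.
handsets_sold = 230.7e6
us_population = 309e6

print(f"{handsets_sold / us_population:.2f}")  # ~0.75 handsets per resident in a single year
```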
What consumers want and how they are able to get it 
As indicated in my previous IP Finance postings, essential IP costs are modest in comparison to the total spent by consumers on mobile communications. However, value derived by consumers from these proprietary technologies is enormous. Whereas technology developers only deserve to reap financial rewards on essential IP technologies that are actually selected and used with commercial success downstream, if and when this occurs, it is quite legitimate that financial returns on these alone should be large enough to cover risks and costs of investing in portfolios of developments. Otherwise, such investments will simply dry up because technologists cannot reliably predict the “winners.” Portfolios will include both technologies that succeed and those that fail technically, are not selected for standardization, or fall short commercially in the marketplace with poor overall demand or in face of competition from alternatives. Competitors with a variety of business models including upstream licensors and vertically-integrated manufacturers generate these returns in different ways, including licensing fees and through profits on product sales. Consumers want improving capabilities, quality and value for money in the devices they buy, and they are willing to pay a fair premium for such value".

Tuesday, 5 July 2011

Fixing IP Prices with Royalty Rate Caps and Patent Pools

This is the fourth in a series of features written by Keith Mallinson (WiseHarbor) for IP Finance. In this piece, Keith contrasts different structures for establishing the price paid for use of IP in the context of essential standards and concludes that, while voluntary patent pools have sometimes had beneficial results, pools should never be imposed, because their imposition would eliminate significant competition that originates from outside the pools; mandatory pools with royalty caps would be anticompetitive and would impede competition.
"Fixing IP Prices with Royalty Rate Caps and Patent Pools

Whereas voluntary patent pooling is common in licensing standards-essential IP for digital audio and video, attempts to impose pooling on licensing complex products, which include multiple standards and many more patents, are ill-suited and potentially anticompetitive. Some companies may voluntarily form patent pools for any particular standard, but mandatory patent pools seeking to limit licensing fees would distort competition by favouring downstream licensees at the expense of upstream licensors who depend on licensing fees to fund their R&D. IP owners, including vertically-integrated companies which combine downstream product businesses with upstream technology licensing, generally prefer bilateral agreements for IP-rich products such as mobile phones. Unlike patent pools, bilateral licenses most frequently include technologies for several standards and other IP, whereas each pool may only include essential patents for just one standard. Technology and market developments are best when competition facilitates various business models and licensing practices. And that also benefits consumers.
Licensing Cartels: From Monopoly to Monopsony 
There is a long history of patent pools being used to monopolise markets, excluding competitors and controlling prices in several cases. 
Adam Smith and others typically depict price fixing as a conspiracy against the public to raise prices. However, there is another way to fix prices: collusion to reduce the prices paid to suppliers. Forcing technology input prices lower would starve upstream technology developers of the profit margins required to sustain employment, reinvestment and their output in technology development. Ultimately this would be to the detriment of consumers, who benefit from rapid and dynamic innovation in ICT and elsewhere.  Reduced licensing fees do not guarantee lower consumer prices. With concentration in supply downstream, manufacturers may take the savings as profits. 
Nevertheless, calls for mandatory or strongly encouraged participation in ICT patent pools are an increasing trend, typically from downstream licensees and their customers, with the self-serving objective of limiting their input costs. Some well-intentioned policy makers also mistakenly regard patent pools as a panacea for supposed problems with complex patent landscapes and patent quality.
In-licensing requirements highest among those with most IP 
Manufacturers with little or no IP and vertically-integrated companies with extensive IP alike depend on in-licensing for most of the IP required in today's ICT products, such as mobile phones. Technology ecosystems are complex webs including those who create new technologies and those who implement them in products. No handset manufacturer has declared more than a small minority of the IP required to implement 3G cellular. Technologies developed by scores of different companies are shared in implementation by hundreds of downstream manufacturers.

Exhibit 1, based on data from a 2009 study funded by Nokia, shows that leading implementers Ericsson, in radio network equipment, and Nokia, in handsets, declared IP ownership amounting to 16% and 14% respectively of the total for 3GPP mobile communications standards with WCDMA. Leading technology and chipset provider Qualcomm declared 26% ownership. (Many have claimed that the study's methodology is flawed; the input data is used here simply to demonstrate the well-accepted fact that many companies hold patents related to these standards.)
Given the need to in-license most essential IP, it is no surprise that, out of self-interest rather than altruism, manufacturers and their downstream customers (mobile operators, who in many cases subsidise handset prices to consumers) have striven to limit aggregate licensing fees. A common proposal from several mobile operators is to limit aggregate essential-IP charges by establishing an LTE patent pool with that specific objective. For example, would-be pool administrators Via Licensing and Sisvel have promoted themselves and pooling over the last two years by scaremongering about the threat of so-called royalty stacking. In one presentation, Sisvel nonsensically projected WCDMA royalties at twice average wholesale prices. I analysed aggregate royalty levels in my last posting here and concluded that aggregate fees are modest and merited by those who invest significantly in risky R&D.
The European Commission DG Comp's draft Horizontal Guidelines recognise that vertically-integrated companies that both develop technology and sell products "have mixed incentives". Companies with a significant share of a downstream manufacturing business generally face higher costs in licensing fees for the IP they do not own than they can generate in licensing fees from the IP they do own. This explains the 2008 attempt by Alcatel-Lucent, Ericsson, NEC, NextWave Wireless, Nokia, Nokia Siemens Networks and Sony Ericsson to cap aggregate royalties below 10% for handsets implementing the 3G/4G LTE standard, as described in my previous IP Finance posting.
Proposed caps are for aggregate maximum rates to be paid for all standards-essential patents owned by all patent holders. In practice, however, net royalty payments are zero or minimal among vertically-integrated companies who cross-license, with or without a cap, so a proposed cap would have little or no impact on licensing costs among such companies. These companies would benefit greatly from any reduction in upstream licensors' fees, which are payable by all licensees, whereas any squeeze on their own charges would be significant only in the minority of the market where they are not cross-licensing to minimise or eliminate net payments. A manufacturer's IP fee income is generally small compared with its product revenues.
IP licensing before and after imposition of an aggregate royalty cap is depicted in Exhibits 2a and 2b respectively. In this simplified yet representative model, 75% of product market share (as applicable to handsets sold in 2010) is supplied by vertically-integrated manufacturers who minimise royalty charges among themselves. Product markets are predominantly supplied by those who hold significant essential IP, even excluding Apple, RIM and HTC, who had no essential IP until after 2006, according to the source used in Exhibit 1. Manufacturers with the largest patent holdings also tend to have the largest shares of the downstream markets for which they need to license in most IP. Smaller manufacturers with significant IP have negotiating leverage over larger players because the latter need licences covering relatively large shares and revenues in product markets. The remaining manufacturers, without IP, who account for the other 25% of market share, instead pay fees for all of the IP licensing they require. Upstream licensors charge fees to all manufacturers downstream to fund their R&D investments. Consistent with the declared IP ownership in Exhibit 1's source, it is assumed that manufacturers without IP to trade make one third of their out-payments to upstream licensors and the remainder to vertically-integrated players. As an example, the royalty cap modelled is an arbitrary reduction of one third in the aggregate royalty rate (as a percentage of handset prices). Total licensing fees paid, received and reduced are proportional to the areas of the various coloured blocks in the two diagrams.

The result is that aggregate royalty rate caps save money for all downstream manufacturers at the expense of upstream licensors. Downstream manufacturers with no IP to trade save most significantly. In this model, vertically-integrated companies lose some revenue, but save significantly more in reduced expenses. For every dollar of licensing revenues they lose through any capping, they save $1.50 in licensing out-payments to upstream licensors. Licensing fees to upstream licensors from all manufacturers fall in the same proportion. 
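For readers who wish to check the arithmetic, the following is a minimal sketch in Python of the royalty flows behind Exhibits 2a and 2b, under the assumptions stated above. The one-third upstream IP share is an assumption inferred from the one-third out-payment split mentioned earlier, and the 10% aggregate rate is purely illustrative; neither figure is taken from the exhibits themselves.

    def royalty_flows(rate, vi_share=0.75, upstream_ip_share=1.0 / 3.0):
        """Licensing-fee flows, with total handset market revenue normalised to 1.0."""
        no_ip_payments = (1.0 - vi_share) * rate  # manufacturers without IP pay the full rate
        return {
            # upstream licensors collect from all manufacturers downstream
            "upstream_receives": no_ip_payments * upstream_ip_share
                                 + vi_share * rate * upstream_ip_share,
            # vertically-integrated (VI) players collect only from no-IP manufacturers,
            # since cross-licensing nets their payments to each other to zero
            "vi_receives": no_ip_payments * (1.0 - upstream_ip_share),
            # VI players still pay upstream licensors for the upstream share of IP
            "vi_pays": vi_share * rate * upstream_ip_share,
        }

    base = royalty_flows(rate=0.10)                # illustrative 10% aggregate rate
    capped = royalty_flows(rate=0.10 * 2.0 / 3.0)  # the cap cuts the rate by one third

    lost = base["vi_receives"] - capped["vi_receives"]  # VI licensing revenue lost
    saved = base["vi_pays"] - capped["vi_pays"]         # VI out-payments saved
    print("VI manufacturers save $%.2f per $1.00 of revenue lost" % (saved / lost))
    print("Upstream receipts fall by %.0f%%"
          % (100 * (1 - capped["upstream_receives"] / base["upstream_receives"])))

Running this reproduces the figures in the text: the vertically-integrated group saves $1.50 in out-payments for every $1.00 of licensing revenue it loses, and upstream licensors' receipts fall by one third, in the same proportion as the rate cut.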
Fish too big for the pool 
Several voluntary patent pools established in the last decade or so have been quite successful, attracting many firms to join as licensees. This collective out-licensing is efficient because the pool administrator can serve as a distribution channel for many licensors and as a one-stop shop for licensees, subject to the pool standard's limited scope and the IP contributed. Research reveals that recent pools for audio and video codec standards-essential patents have in most cases attracted the majority of the standards-essential patents for those standards; for MPEG-4, 34% of the firms holding applicable patents contributed 89% of the required patents. This research also concludes that, while vertically-integrated companies which manufacture products implementing the standards are the most inclined to join, many vertically-integrated and upstream essential-IP owners decide to stay out. Some IP owners find that they can derive more value from bilateral licensing and cross-licensing, or that pools do not provide sufficient freedom to pursue and defend their downstream businesses. Specific concerns include:
  • The difficulty of determining how to share pool profits among thousands of patents, given uncertainties around essentiality and the relative values of patents;
  • Differing business models, with upstream licensors and vertically-integrated manufacturers each holding major proportions of essential IP;
  • Asymmetries in patent ownership among these manufacturers and versus upstream licensors;
  • The need to license devices for multiple standards (2G, 3G, 4G, video and audio) and for other technologies outside the standards, meaning that bilateral deals, which can encompass all of a company's IP, will always be necessary and are more flexible;
  • The need to resolve significant patent litigation amid fierce competition between vertically-integrated manufacturers and other end-user product manufacturers without standards-essential IP.
Such resolution is mostly achieved through bilateral settlements, which would likely be extremely difficult if the companies had agreed to, or been forced into, patent pools. Pooling IP would surrender control of this most strategic asset for several major players, and mandatory pooling would expropriate this valuable private property. For example, it could have limited Nokia's ability to sue Apple for significant licensing fees in 2009, based upon Nokia's standards-essential WCDMA patents, and then, expediently, to settle for cash in the face of counter-suits and deteriorating finances, with a profit warning most recently. In contrast, the 3G Licensing pool has never sued for patent infringement. While announcing settlement of the patent infringement litigation with Apple, Nokia's CEO, Stephen Elop, stated that Nokia's cumulative R&D investment over the past two decades was EUR 43 billion ($60 billion). This is largely justified by sales of its own products and by minimising aggregate royalty out-payments, stated to be less than 3% gross up to 2007, through bilateral licensing. The fees to be received in the cross-licensing settlement with Apple, whose revenue share is now close to the market-leading levels of Nokia and Samsung, were not disclosed.
Whereas Google does not manufacture anything, HTC and Samsung are being sued by Apple for infringement, by their smartphone devices employing Google's Android operating system, of patents that are not essential to the mobile standards. Google made a stalking-horse bid of $900 million for a portfolio of 6,000 patents, including essential IP, from bankrupt Nortel. The patents would have had great defensive value to Google, which makes its money from advertising in search on PCs and phones using its software and services but has a limited patent portfolio. However, a consortium of Apple, Microsoft, Sony, Research In Motion, Ericsson and EMC obtained Nortel's patents for $4.5 billion. The consortium's rules are not publicly known, but presumably the members will be able to use the portfolio defensively in bilateral licence negotiations and litigation settlement discussions.
Absent (misguided) regulatory fiat, there is no reason why an LTE pool would become any more significant than the insubstantial and struggling WCDMA pool. Attempts in the early 2000s by the 3G Patent Platform Partnership (set up by some telecom companies as a voluntary pooling arrangement) to regulate 3G IP fees through collective licensing and a "Maximum Cumulative Royalty Rate" of 5% were unsuccessful. The WCDMA patent pool includes mainly mobile operators and Japanese manufacturers, and it covers only around 10% of the patents declared by their holders to be WCDMA standards-essential. Meanwhile, multimode, multimedia devices (e.g., smartphones and 3G tablets) incorporate increasing numbers of cellular and other standards. Proposed LTE patent pools have likewise made little progress over the last couple of years, owing to all of the same difficulties faced by the 3G patent pools.
No panacea
Manufacturers, including the vertically integrated with significant IP, have self-serving incentives to cap aggregate royalties: caps would reduce their downstream product licensing costs significantly more than they would reduce their licensing revenues. However, these companies tend not to favour patent pools, for the other reasons given above. Unfortunately, the significant shortcomings are not recognised by many policy makers, who mistakenly see patent pools as a panacea for supposed problems with complex patent landscapes. Voluntary patent pools have been beneficial in some cases, but patent pools should never be imposed, because this would eliminate the significant competition that comes from outside the pools. Mandatory pools with royalty rate caps would be anti-competitive and would impede innovation".