
Wednesday, 8 January 2020

How innovative, competitive and well adopted was 4G LTE in mobile communications—implications for outlook in 5G?

LTE's introduction a decade ago and its development as the definitive 4G mobile communications standard that predominates in smartphones are an outstanding accomplishment. Competition has served technology innovators, manufacturers, mobile network operators (MNOs), over-the-top service providers and end-users extremely well. Markets have functioned and advanced superbly, with a vibrant supply ecosystem providing 4.1 billion LTE connections out of 9.4 billion in total worldwide. In the U.S., 63 percent of the nation’s 479 million mobile connections use LTE.
Despite overwhelming evidence of this extraordinary and widespread success, some allege that illegal and anticompetitive practices have caused significant harms including suppressed innovation, market exclusion and excessive pricing. While legal arguments and economic theories are extensively articulated by the parties and their amici in the U.S. Federal Trade Commission’s antitrust action against Qualcomm—with the case still on appeal following the Northern California District Court’s ruling against Qualcomm—my analysis here focuses on market and economic facts and figures in innovation, competition and consumer welfare over the last decade with LTE. There is no evidence of those negative effects; there is, however, proof of commercial failure by the alleged principal injured party, Intel, due to its poor strategic judgment and inability to keep up with the exacting technical pace of a fiercely competitive marketplace in smartphone chips.
Antitrust law exists to preserve competitive processes, not to protect competitors. High prices are not per se illegal because they provide the incentive for increased competition, such as from new market entrants and lower-cost innovations. Suppliers that are inefficient in costs, quality or speed-to-market versus competitors should not be protected from their own failings.

Every new decade, a new G

A new generation of mobile technology is introduced approximately every 10 years. As the new decade turns, it is most opportune to assess how LTE has exceeded all expectations, and what has made this possible, since its first introduction around the turn of the previous decade. Were concerns about the introduction of yet another new G—including the need to invest in a network overlay and more spectrum, replace devices and pay additional patent license fees—well founded or needless?
Many MNOs, particularly in Europe, were very disappointed with their transitions to 3G in the early 2000s due to high spectrum costs and initially disappointing demand for new data services. Conversely, in the US, AT&T waited until mobile broadband became available with HSDPA in 2005 and deployed it in its existing spectrum. With exclusivity over iPhones in the US, its network became overloaded and in dire need of capacity expansion by around the end of the decade.
The very first commercial launches of LTE were in Scandinavia by TeliaSonera in late 2009. Following several more launches in 2010, the new standard was most significantly established with its introduction by Verizon at the end of that year and by AT&T in 2011. Both of those MNOs largely deployed LTE initially in new spectrum at 700MHz. It provided great coverage, together with much improved data speeds and network capacity. That was just the beginning for LTE.

Consumer demand surges with smartphones and LTE

While press and consumer attention in the smartphone and mobile broadband revolution over the last decade or so has focused mostly on device original equipment manufacturers (OEMs) including Apple and Samsung, the increases in communications performance have largely been down to others: technology developers, chip component and network equipment suppliers, and the MNOs that deploy the networks.
While mobile broadband data initially grew from a low base at a fast rate using the 3G technologies CDMA EV-DO and HSDPA from the mid 2000s—with most demand from PC data cards and dongles—that exponential trajectory has been maintained, with data growth compounding at around 60 percent or more annually for the last decade.
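To put that growth rate in perspective, here is a minimal sketch (my own illustrative arithmetic, assuming a constant 60 percent annual growth rate rather than a figure taken from any particular traffic report) showing how such compounding multiplies traffic over a decade.

    # Compound growth sketch: 60 percent annual growth sustained for ten years.
    growth_rate = 0.60  # assumed annual growth rate
    years = 10
    multiple = (1 + growth_rate) ** years
    print(f"Traffic multiple after {years} years: ~{multiple:.0f}x")  # ~110x

In other words, a decade of roughly 60 percent annual growth means traffic on the order of one hundred times its starting volume.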

Mobile broadband data consumption has grown enormously in recent years

This was significantly due to the rapid adoption of smartphones following the introduction of the iPhone 3G and the first Android operating system device in 2008. Smartphones embodied a variety of innovative new technologies including applications processing, displays and sensors. Improved communications with LTE, in conjunction with an increasing supply of licensed spectrum for mobile, were perfectly placed to accommodate demand growth. The first Android smartphone with LTE was launched in 2010 and Apple’s first LTE smartphone was the iPhone 5 in 2012. It took less than a decade for smartphones to overwhelmingly substitute for feature phones.

Smartphones predominate in U.S. handset purchases since 2011

Market dominance and concentration in supply

Other measures commonly used to assess economic efficiency in antitrust investigations also indicate that mobile technology markets are healthy and dynamic.
Some industries are inherently and necessarily highly concentrated. For example, Boeing and Airbus have a duopoly in supply of large commercial aircraft. The number of suppliers and the relative positions among them reflect industry economies of scale, barriers to entry, strategic focus and competitive strengths in execution with customers purchasing largely based on technical specifications, cost and delivery performance. Trends in market concentration over several years are very informative about how market competition is developing.
The supply of mobile handsets including smartphones has remained unconcentrated for many years because merchant supply of highly standardized components and open standards in cellular technologies have reduced barriers to market entry to low levels. In the 2000s, Nokia dominated with a vertically integrated supply chain, up to 40 percent market share in handsets and an even higher share in the high-end devices that were precursors to modern smartphones. Since smartphones became mainstream in the 2010s, there have been many new market entrant OEMs, and the positions of some leading incumbents, including Nokia and BlackBerry, have collapsed due to competition.
Concentration is inevitably rather higher in digital baseband modem chips than in mobile phones because supply differs from handsets, with much higher barriers to entry in R&D requirements and economies of scale in product design and production. While some modem chip vendors have exited the marketplace in the last decade, MediaTek’s share of LTE modem chip sales rose to 24 percent in 2016 before falling as shares rose significantly for the vertically integrated suppliers Samsung and Huawei, with its HiSilicon division. Large shifts in market share away from leaders and rapid reductions in concentration indicate intense competition.
The extent of concentration in supply can be quantified by reference to the Herfindahl-Hirschman Index, a widely accepted measure of market concentration in competition analysis. The HHI is calculated by summing the squared market shares of all firms in any given market. U.S. antitrust authorities generally classify markets into three types: Unconcentrated (HHI < 1,500), Moderately Concentrated (1,500 < HHI < 2,500), and Highly Concentrated (HHI > 2,500).
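For illustration, here is a minimal sketch of that calculation, using hypothetical market shares rather than actual figures for any of the segments discussed here; the classification thresholds are the ones cited above.

    # Minimal HHI sketch: square each firm's market share (in percent) and sum.
    def hhi(shares_pct):
        return sum(s ** 2 for s in shares_pct)

    def classify(index):
        if index < 1500:
            return "Unconcentrated"
        if index <= 2500:
            return "Moderately Concentrated"
        return "Highly Concentrated"

    # Hypothetical example: four suppliers with 40, 30, 20 and 10 percent shares.
    shares = [40, 30, 20, 10]
    index = hhi(shares)
    print(index, classify(index))  # 3000 Highly Concentrated

Because shares are squared, a single large supplier raises the index far more than several small ones, which is why shifts in share away from a leader pull the HHI down quickly.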
High concentration in LTE modem chip supply was very transient. Concentration in new market segments is likely to be high as the first few suppliers enter. Between 2013 and 2016, LTE modem chip supply concentration trended down to lower levels than in the 3G UMTS, 2G GSM/GPRS and 3G CDMA modem chip segments. LTE supply concentration has fallen to a Moderately Concentrated level, and Qualcomm now accounts for less than a 40 percent share. In contrast, UMTS (i.e. WCDMA/HSDPA) and GSM/GPRS/EDGE modem chip supply concentration has increased as MediaTek’s shares have grown to exceed 50 percent in each of these segments, while Qualcomm’s shares have diminished to only a few percent in UMTS and zero in GSM/GPRS/EDGE. While the FTC also alleges that Qualcomm has illegally dominated CDMA chip supply, since 2017 it is VIA Telecom (acquired by Intel in 2015) that has had the highest share of this segment and largely accounts for its high and increasing HHI.

Market concentration in supply of baseband modem chips and handsets including smartphones

Qualcomm has excelled in bringing the latest advanced features to market most rapidly, as has MediaTek with mid-range, low-cost solutions, while VIA Telecom has focused on CDMA.

A lot more bang for your buck

Meanwhile, consumer prices—measured per gigabyte of data in dollars or whatever currency prevails nationally—have fallen dramatically to a small fraction of their levels around the turn of the last decade, as is evident in the US. This has been due to the low costs of LTE technology and fierce competition throughout the value chain.
Source: Qualcomm’s Opening Statement presentation, p29, at trial on April 16, 2019. In Re: Qualcomm litigation Case No. 3:17cv0108-GPC-MDD (S.D. Cal.)
“I skate to where the puck is going to be, not where it has been”—Wayne Gretzky
Surviving, let alone winning, in industry sectors with rapid technological change and major investment requirements is not easy. Sound strategic and commercial judgment, as well as a modicum of good luck, are as important as technical competence. Intel’s various incoherent forays in cellular chips make a pertinent case study in strategic failure, not in abuse by a much smaller company.
Each generation of mobile technology is commonly portrayed and perceived—particularly in hindsight—as a single entity. However, with a new 3GPP standard release every year or two, LTE was first specified in Release 8 (2009) and then improved with increased functionality and performance six times before 5G was first standardized in Release 15. Whereas LTE and 4G are now universally regarded as synonymous, it was only with Release 10 (2011) that LTE became compliant with the International Telecommunication Union’s IMT-Advanced specifications, which are generally regarded as defining 4G. LTE Advanced Pro in Release 13 (2016) was another significant performance upgrade milestone.
Numerous technological improvements in LTE’s introduction and continuous development have increased spectral efficiency, spectrum reuse, data speeds and network capacity, reduced latency, and provided entirely new capabilities. Improvements include the OFDMA waveform, carrier aggregation, MIMO, advanced channel coding, higher-order modulation, use of unlicensed spectrum and improved positioning technologies.
Standards-setting organizations (SSOs) map out, for all to see, which new features will be introduced in each new standard release. That is very helpful for product developers, but so much results from the collaboration among SSO participants and appears in the standards that chip and network equipment vendors are chasing multiple moving targets. The general direction of travel might seem obvious in hindsight, but a fast pace and good judgment in selecting and committing to the most important improvements are essential. Some features turn out to be much more important than others. While device OEMs design and manufacture smartphones, it is largely the modem chip vendors and network equipment OEMs that have developed and supplied the technologies and products that implement the latest standard-based improvements or enable MNOs and users to benefit from them.
Leaders must not only be the fastest to invent and bring to market, they must also know where and when to place their big bets. Those that make the wrong call will suffer significant adverse consequences with exacting requirements from OEMs and their MNO customers.

Self-harm

While Intel is portrayed as the major injured party in the FTC’s case against Qualcomm, Intel failed in modems at Apple and elsewhere for several significant reasons, despite its deep pockets and its position as a leading semiconductor chip designer and silicon fabricator. It even squandered the advantages of its incumbency as the sole modem chip supplier to Apple for iPhones and iPads from 2007 until 2011, while also being, in that period and ever since, Apple’s sole supplier of CPUs for its Mac computers.
Intel failed to recognize the (mis)match between what it was pushing and what OEMs wanted. It foreclosed itself from all but a relatively small proportion of the LTE modem chip market segment. Most smartphones include chips that integrate the baseband modem processor with an application processor based on the ARM instruction set and architecture. There was never a distinct “thin modem” market—in the sense of defining a relevant market for competition purposes. Modem suppliers need to address the entire market segment of modem supply—including thin and integrated modems—to be efficient in the development and production of technologies and products. The proportion of thin versus integrated modems in smartphones has fallen from around 40 percent in 2011, when most smartphone OEMs were just getting started, to percentages only in the teens in the last few years.
Intel has offered no ARM-based application processor since it sold its XScale business to Marvell in 2006. It failed in its alternative strategy of getting its “Intel Architecture-based [x86] processors” adopted in smartphones and tablets. Its x86-based Atom application processor was uncompetitive for many reasons, including higher power consumption and an inferior supply ecosystem with higher costs for the associated components needed to support the chip. Intel fared poorly despite spending billions on subsidies in its attempts to build a mobile device beachhead in tablets. It never achieved more than a small share of supply to tablet OEMs and no more than a trivial share of supply to smartphone OEMs.
Intel captured Apple as its sole 3G modem chip supplier for iPhones when it re-entered the market with its acquisition of Infineon’s cellular chip division in August 2010. However, Infineon would have known by then—as Intel should also have known through its acquisition due diligence, had that been carried out thoroughly and competently—that the modem business was about to be lost with the upcoming February 2011 launch of an iPhone 4 model based on a Qualcomm chip. Intel’s other 3G thin modem customers included Samsung and Huawei, which have subsequently switched significantly to vertically integrated supply. Apple aside, Intel’s share of LTE supply was never more than a percent or two. Bad luck, or poor market intelligence, judgment and execution?

Intel was too late in finding its voice

Having been ejected from Apple in 3G, Intel was very anxious to get back in there with LTE. But it failed to keep up with the pace of standard-based developments in LTE. Intel was late with LTE-Advanced (i.e. actual 4G) improvements and was at least two years late in being able to offer voice over LTE (VoLTE). LTE had no voice capability before VoLTE was standardized. Leading mobile operators—including AT&T and Verizon in the US—demand certain features in devices to exacting schedules. For example, with major operators including T-Mobile US and Verizon launching VoLTE services by 2014, they were insisting on VoLTE in new phone models beforehand. This was significantly driven by their desire to seed the market for use of the new service and to enable them to shut down older-generation networks, such as Verizon’s 3G CDMA network by the end of 2019. Despite those efforts, that date has slipped to 2020 to avoid leaving customers with phones that cannot make phone calls.
Many devices are used on networks for more than five years following new model introduction. Popular models are commonly sold for more than three years before being withdrawn from sale. For example, Verizon is still selling the iPhone 6s (2015) and Galaxy S7 (2016). The last of those sold are likely to be used for another few years before being retired.
It was not until 2016, with chip supply for the launch of the iPhone 7 in September that year, that Intel could meet the LTE voice specification requirements of Apple and its MNO customers. In contrast, MetroPCS launched VoLTE with the LG Connect 4G in January 2012, and VoLTE was incorporated in the iPhone 6 (September 2014). Qualcomm LTE modems were included in both devices.

What was he smoking?

In addition to strategic conflicts, Intel also suffered from delusions at the highest level. For example, despite Intel not even being able to do voice in LTE, former Intel CEO Brian Krzanich proclaimed in 2016 that Intel was the leader in 5G, including in modem technology. This was way off the mark. In fact, the main reason Intel exited modem supply, announced by replacement CEO Bob Swan in April 2019, and why Apple settled all its litigation with Qualcomm the very same day, was that Intel could not keep up with the required pace and schedule in its 5G technology developments. Apple was clearly fearful it would not be ready to introduce 5G iPhone devices in 2020 without switching back to Qualcomm’s supply.
While the period of Qualcomm’s alleged misconduct is only to 2016, the FTC regurgitates the Court’s contention that Qualcomm will remain dominant in the transition to 5G, but without explicitly alleging any abuse there. Qualcomm has clearly competed on the merits in establishing itself as the leader in 5G modem chips. With a new air interface and addition of mmWave bands (i.e. with high-band frequencies at 24 GHz and abo)  its astute competitive strategy has included unmatched technology development in modems and acquisition in RF front-end components.

Voodoo economics II

The FTC’s most significant but hotly contested theory of harm, and that the district court has accepted, is that Qualcomm’s royalty charges to OEMs impose a “surcharge” on chip competitors that limits their ability to invest in R&D and makes them unable to compete on the merits such as in technical performance. Why the royalty charge is any different to any other necessary input cost—such as that for the display or battery components—is a mystery. OEMs are charged royalties non-discriminately regardless of modem supplier. According to the FTC’s allegations, and despite evidence to the contrary, Qualcomm’s royalty charges are excessive and are only paid because it supplies “must have” chips and has a “no license, no chips policy.”
That theory suggests that elimination of the alleged surcharge should enable a chip vendor to become competitive. However, Intel still failed despite that supposed relief. It commenced LTE modem chip supply to Apple for the iPhone 7 in 2016 and was the sole modem supplier to Apple for all subsequently launched models, including the iPhone XS (2018) and iPhone 11 (2019). By April 2017, royalties paid to Qualcomm on Apple products, including those with Intel’s chips, had ceased and were not resumed until April 2019.
Rather than capitalizing on this window of opportunity, Intel failed on the merits, as indicated by its chip market exit, despite this non-payment of any royalties, including the alleged surcharge, to Qualcomm. While Intel’s dollar expenditures on R&D increased, R&D decreased as a percentage of its rising sales (i.e. including new sales of modems to Apple): $12.7 billion (21.4 percent) in 2016, $13.0 billion (20.8 percent) in 2017 and $13.5 billion (19.1 percent) in 2018. Based on the FTC’s economic theory, Intel, supplying Apple, should have had lower costs than other chip suppliers whose customers were still paying Qualcomm royalties. LTE modem chip market segment shares for Huawei (HiSilicon) and Samsung, which also buy Qualcomm chips and pay it licensing fees, have continued to increase since 2016.
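As a minimal sketch of my own arithmetic (derived only from the figures quoted above, not from the FTC case record), those R&D percentages imply how quickly Intel’s sales were rising over the same period.

    # Derive implied annual sales from the quoted R&D spend ($bn) and
    # R&D expressed as a percentage of sales.
    rd_figures = {2016: (12.7, 21.4), 2017: (13.0, 20.8), 2018: (13.5, 19.1)}
    for year, (rd_bn, pct_of_sales) in sorted(rd_figures.items()):
        sales_bn = rd_bn / (pct_of_sales / 100)
        print(f"{year}: R&D ${rd_bn}bn ({pct_of_sales}%) -> implied sales ~${sales_bn:.0f}bn")
    # Roughly $59bn, $62bn and $71bn: R&D dollars rose, but sales rose faster.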

Be careful what you wish for and be grateful for what you have

There will always be prophets of doom and self-serving interests who predict harms such as market failures if changes are not made. The kinds of accusations made by the FTC about Qualcomm in LTE echo those made against Qualcomm in UMTS, just before mobile broadband with HSDPA took off and before those charges were dropped by the European antitrust authorities.
Prior to the introduction of LTE and for several years subsequently it was alleged that royalty stacking would make the technology prohibitively costly, particularly since LTE royalties would stack on those that had to be paid in multimode equipment including 2G and 3G.
A royalty stack never appeared in 3G and it never appeared in 4G. The only harms are the contentions over patent royalties, which are costing a lot in legal fees and enabling implementers including Apple and others to “efficiently infringe” by holding out from payment while enjoying the benefits of rich standard-based technologies.
As I explained here last month and previously, with patent licensing fees paid amounting to less than five percent of handset prices, such costs are dwarfed by the value that has been created: annual revenues of around half a trillion dollars in handsets, more than a trillion in operator services, plus huge revenues to the over-the-top players that have flourished over the last decade in the smartphone and mobile broadband revolution with LTE.
Uber, Instagram, FaceTime and Netflix all launched in 2010 and have, among many other OTT providers, significantly benefitted from LTE’s mobile broadband capabilities. For example, Netflix has enjoyed a 4,000 percent stock rally with its streaming services significantly used on mobile devices. Smartphone OEMs have benefitted from these services because users want devices that can best access these services. Mobile operators benefit because they generate mobile broadband service revenues even from “free” services that are delivered on top. These services are transforming the way we work and play with daily hours of smartphone usage even exceeding TV watching.
Significant ongoing development has been required since the introduction of LTE and the first 4G technologies. Whereas coverage and capacity were easily established with the deployment of additional spectrum at 2 GHz and below, there is nowhere near enough spectrum available there to satisfy escalating mobile broadband capacity demands. New technologies in the later LTE releases and in 5G, including Massive MIMO antenna arrays and HPUE to increase device uplink radio transmission performance, are expanding capacity by better exploiting frequencies above 2 GHz. 5G has been designed to access mmWave bands with orders of magnitude more bandwidth than is accessible with previous generations of technology. All this, let alone what is yet to come with URLLC and mMTC in the Internet of Things, would not be possible without major ongoing R&D investments. These should not be taken for granted—particularly in the race to establish and maintain global leadership and national security in 5G.
This article was originally published in RCR Wireless in a very similar form on 7th January 2020.

Keith Mallinson is a leading industry analyst, commercial consultant and testifying expert witness. Solving business problems in wireless and mobile communications, he founded consulting firm WiseHarbor in 2007.

Wednesday, 16 October 2019

Patent Eligible Subject Matter Reform in the United States: The Pendulum Will Swing Too Far (again)?


In the United States, the law of patent eligible subject matter has become a big mess.  There are many different ways to frame how we got to this point.  One narrative tracks the concern with so-called patent trolls and the issuance of poor patents by the USPTO.  For sure, many have had concerns about the enforcement of patents, and the USPTO has issued a number of poor quality patents.  Part of the problem with the reform effort may have been that too many proposals were adopted to confront the issues.  In a perfect world, I suppose, a proposal would be enacted and then we would gather data and try to assess its impact.  Instead, we made many policy changes at once, creating perhaps an even larger mess with different problems, possibly to the detriment of innovation.  Indeed, the changes to patent eligible subject matter law by Alice and Mayo may have gone too far—in light of other changes to the U.S. patent system designed to curb troubling enforcement and poor patent quality.


One of the main current problems seems to be the application of the Alice/Mayo test and the failure to achieve consistency in its application.  Unfortunately, one casualty of Alice/Mayo may be collegiality amongst U.S. Court of Appeals for the Federal Circuit judges and the institution itself—the recent opinion, American Axle & Manufacturing v. Neapco Holdings, issued on October 3, 2019, is an interesting example.  Basically, Judge Moore, writing the dissent, accuses Judge Dyk, author of the majority opinion, of engaging in judicial activism, among other things.  I believe that most think that judges should “call it as they see it.”  However, the problem may be the test itself—it is just too difficult to apply with consistency, and a reasonable application could result in problems, particularly if there is an underdeveloped record.  For sure, the attempt to utilize patent eligible subject matter as a way to eliminate cases early has been eroded by some panels of the Federal Circuit.  Does this mean that the main value of the test has been lost?  Should we stick to obviousness as the gatekeeper of patentability?  Can Congress actually fix this without overshooting on eligibility, resulting in more and different problems?  What about the concern with drug pricing?  Is the current test for patent eligible subject matter unfixable?  Do we need to think harder about different patent eligible subject matter rules for different industries?  Does it look like the Federal Circuit is properly using doctrines as policy levers across different industries?  I do think most of us would agree that keeping the Federal Circuit is a good idea (there are some who disagree).

Tuesday, 18 June 2019

U.S. Court of Appeals for the Federal Circuit: Public Universities do not have Sovereign Immunity from Patent IPRs


In a new U.S. Court of Appeals for the Federal Circuit opinion, Regents of the University of Minnesota v. LSI Corporation, et al., the court held that states, including public universities, are not entitled to sovereign immunity from Inter Partes Review (IPR) proceedings filed at the United States Patent and Trademark Office (USPTO) to challenge an issued patent.  Judge Dyk, writing for the court, provides a nice overview of the history of administrative challenges to issued patents as well as the process for filing and prosecuting an IPR.  Notably, Judge Dyk points to the resource constraints of the USPTO in evaluating patentability and notes that the federal government is essentially drafting third parties through IPRs to test patentability.  Judge Dyk discusses and relies upon the reasoning of Saint Regis Mohawk Tribe v. Mylan Pharmaceuticals Inc., 896 F.3d 1322 (Fed. Cir. 2018), in which the Federal Circuit refused to apply tribal sovereign immunity to IPRs.  The court notes that it was unnecessary to reach the issue of whether the University of Minnesota waived sovereign immunity for an IPR by filing a patent infringement suit concerning the IPR-challenged patent.  This decision puts U.S. public university generated and owned patents in the IPR crosshairs.  Interestingly, it puts U.S. public university patents on the same footing as foreign university owned and generated U.S. patents for purposes of challenge through IPRs, thus removing a potential advantage for U.S. public universities versus foreign universities in the United States.

Wednesday, 20 June 2018

Judges Lourie and Newman of the Federal Circuit Critique Alice/Mayo and Myriad


The U.S. Court of Appeals for the Federal Circuit, in Aatrix Software, Inc. v. Green Shades Software, Inc., recently denied rehearing en banc in two cases that may make it more difficult to dismiss a claim challenged for lack of patent eligible subject matter under the Alice/Mayo test because of factual issues.  This leaves intact the ability of counsel to raise factual issues that may avoid early resolution of a patent infringement action on patent eligible subject matter grounds.  Notably, Judges Lourie and Newman, both of whom have graduate degrees in technical fields and are very experienced members of the Federal Circuit, requested in a concurring opinion that the U.S. Congress revisit patent eligible subject matter, particularly in light of the Alice/Mayo test and the U.S. Supreme Court’s “abstract idea” gloss.  Judge Lourie states:

The case before us involves the abstract idea exception to the statute.  Abstract ideas indeed should not be subject to patent.  They are products of the mind, mental steps, not capable of being controlled by others, regardless what a statute or patent claim might say.  Gottschalk v. Benson, 409 U.S. 63, 67 (1972) (“[M]ental processes, and abstract intellectual concepts are not patentable, as they are the basic tools of scientific and technological work.”).  No one should be inhibited from thinking by a patent.  See Letter from Thomas Jefferson to Isaac McPherson (Aug. 13, 1813) (“[I]f nature has made any one thing less susceptible, than all others, of exclusive property, it is the action of the thinking power called an Idea.”).  Thus, many brilliant and unconventional ideas must be beyond patenting simply because they are “only” ideas, which cannot be monopolized.  Moreover such a patent would be unenforceable.  Who knows what people are thinking?  

But why should there be a step two in an abstract idea analysis at all?  If a method is entirely abstract, is it no less abstract because it contains an inventive step?  And, if a claim recites “something more,” an “inventive” physical or technological step, it is not an abstract idea, and can be examined under established patentability provisions such as §§ 102 and 103.  Step two’s prohibition on identifying the something more from “computer functions [that] are ‘well-understood, routine, conventional activit[ies]’ previously known to the industry,” Alice Corp. Pty. Ltd. v. CLS Bank Int’l, 134 S. Ct. 2347, 2359 (2014) (alteration in original) (quoting Mayo, 566 U.S. at 73), is essentially a §§ 102 and 103 inquiry.  Section 101 does not need a two-step analysis to determine whether an idea is abstract.   I therefore believe that § 101 requires further authoritative treatment.  Thinking further concerning § 101, but beyond these cases, steps that utilize natural processes, as all mechanical, chemical, and biological steps do, should be patent-eligible, provided they meet the other tests of the statute, including novelty, nonobviousness, and written description.  A claim to a natural process itself should not be patentable, not least because it lacks novelty, but also because natural processes should be available to all.  But claims to using such processes should not be barred at the threshold of a patentability analysis by being considered natural laws, as a method that utilizes a natural law is not itself a natural law.

Judge Lourie also criticized the U.S. Supreme Court’s decision in Myriad Genetics:

[F]inding, isolating, and purifying such products are genuine acts of inventiveness, which should be incentivized and rewarded by patents.  We are all aware of the need for new antibiotics because bacteria have become resistant to our existing products.  Nature, including soil and plants, is a fertile possible source of new antibiotics, but there will be much scientific work to be done to find or discover, isolate, and purify any such products before they can be useful to us.  Industry should not be deprived of the incentive to develop such products that a patent creates.  But, while they are part of the same patent-eligibility problems we face, these specific issues are not in the cases before us.   Accordingly, I concur in the decision of the court not to rehear this § 101 case en banc.  Even if it was decided wrongly, which I doubt, it would not work us out of the current § 101 dilemma.  In fact, it digs the hole deeper by further complicating the § 101 analysis.  Resolution of patent-eligibility issues requires higher intervention, hopefully with ideas reflective of the best thinking that can be brought to bear on the subject.

There are numerous proposals for changing patent eligible subject matter before the U.S. Congress, for example, see the AIPLA proposal, here. 

Thursday, 15 February 2018

Federal Circuit Pushes Back on U.S. Supreme Court’s Alice Decision on Procedure


In a pair of interesting software-related cases, the U.S. Court of Appeals for the Federal Circuit appears to push back on one of the supposed goals of the U.S. Supreme Court’s Alice v. CLS Bank International decision.  In Alice, the U.S. Supreme Court clarified and restated the test from Mayo Collaborative Services v. Prometheus concerning patent eligible subject matter.  In doing so, the Supreme Court started a new era of U.S. patent law in which patent eligible subject matter became a very important inquiry with respect to the patentability of inventions, particularly those in the software space—although Alice’s impact is felt in other technological areas.  Since Alice issued, the U.S. Court of Appeals for the Federal Circuit has clarified the Alice test and notably provided guidance to patent lawyers on how to “avoid” or “comply” with Alice.

Importantly, one of the purported benefits of Alice was to allow for the early dismissal of claims based on patent eligible subject matter.  An alleged infringer could conceivably raise patent eligible subject matter quickly and get a claim dismissed on either a 12(b)(6) motion for failure to state a claim or a motion for summary judgment.  In additional push-back against Alice, the Federal Circuit in Berkheimer v. HP (February 8, 2018) recently held that even after claim construction, a motion for summary judgment on patent eligible subject matter may be improper because of genuine issues of material fact.  While this is standard law concerning motions for summary judgment, the case provides a blueprint for how genuine issues of material fact can be created with respect to patent eligible subject matter.  Because of this possibility of creating a genuine issue of material fact, patentees will have additional settlement leverage to realistically threaten to take a case through trial—a costly endeavor.  What will the effect of this case be on Alice’s attempt to curb so-called patent troll litigation?
In another recent case, the Federal Circuit in Aatrix Software v. Green Shades Software (February 14, 2018) remanded a case because the district court did not allow the patentee to amend its complaint to survive a 12(b)(6) motion on claim construction.  While the Federal Circuit was careful to note that a complaint can be dismissed on a 12(b)(6) motion to dismiss, this case cautions district court judges to carefully consider motions to amend complaints. 

It will be interesting to see if the Federal Circuit’s decisions about the procedural challenge of patents based on patent eligible subject matter in the courts will have an impact on the analysis in the pending Oil States case before the U.S. Supreme Court. 

Thursday, 2 February 2017

Trump's Nomination of Neil Gorsuch and Intellectual Property


Recently, President Donald Trump nominated Neil Gorsuch of the U.S. Court of Appeals for the 10th Circuit to the U.S. Supreme Court.  Many have expressed disappointment at the nomination because of his close resemblance to the late Associate Justice Antonin Scalia, but to some it certainly could have been worse.  Interestingly, BuzzFeed discusses a survey finding that, based on citations to Scalia opinions, there is one judge supposedly closer to Scalia than any of those considered by Trump: Merrick Garland.  Yes, Merrick Garland, who was former President Obama’s pick.

What of Neil Gorsuch’s impact on IP should he be confirmed?  There’s some speculation and some have reported that we don’t have enough information.  Interestingly, the Congressional Research Service (CRS) released an initial report yesterday (February 1, 2017) on Judge Gorsuch.  On what area may he have the most impact on IP: his views on executive power and administrative law.  Judge Gorsuch has apparently criticized the Chevron doctrine which essentially provides that courts should provide deference to an agency’s interpretation of law.  The CRS notes:

Nonetheless, at least in one area of considerable congressional interest, Judge Gorsuch’s views on the law could be seen to be quite distinct from those of Justice Scalia: administrative law. For much of his career on the bench, Justice Scalia was a proponent of Chevron deference, the doctrine that when statutory language is ambiguous or silent on an issue, federal courts should defer to an agency’s reasonable interpretation of a statute it administers. He argued that the doctrine operated as a clear, bright-line rule against which Congress could legislate. In contrast, Judge Gorsuch, in a concurring opinion in Gutierrez-Brizuela v. Lynch, argued that Chevron and its progeny allow “executive bureaucracies to swallow huge amounts of core judicial and legislative power and concentrate federal power in a way that seems more than a little difficult to square with the Constitution of the framers’ design.” In so writing, Judge Gorsuch suggested that the Supreme Court should reconsider Chevron, an action which, if taken by the Court, could upend decades of administrative law and potentially alter the role of Congress in drafting laws for implementation by administrative agencies.

Overturning Chevron could move some power from the United States Patent and Trademark Office back to the U.S. Court of Appeals for the Federal Circuit—think Cuozzo and the broadest reasonable interpretation standard.  Law Professor Jonathan Turley at George Washington University Law School has been one of the loudest critics of executive overreach—including excesses in the George W. Bush and Obama administrations.  (Hat tip to my colleague, Professor John Sims, for the reference to the CRS report.)

Saturday, 17 January 2015

U.S. Court of Appeals for the Federal Circuit Affirms Two Large Patent Damages Awards: $70 million and $371 million

The U.S. Court of Appeals for the Federal Circuit recently affirmed two large patent damages awards.  The first case is Stryker Corp. v. Zimmer (December 19, 2014), a decision authored by Chief Judge Prost, which affirmed a jury verdict of $70 million in lost profits; the decision did, however, reverse an award of treble damages for willful infringement.  The second case is another Chief Judge Prost opinion, Bard Peripheral Vascular v. W.L. Gore and Associates (January 13, 2015), which affirms a doubling of damages for willful infringement from $185,589,871.02 (jury verdict) to $371,179,742.04.  That is quite a win for former federal appellate court judge and Stanford Law School professor Michael W. McConnell, also of counsel at Kirkland & Ellis.

Thursday, 11 September 2014

Bold Proposal on U.S. Patent Reform: Eliminate the U.S. Court of Appeals for the Federal Circuit

The Cato Institute is "a public policy research organization—a think tank—dedicated to the principles of individual liberty, limited government, free markets and peace," which operates Cato Unbound, an online journal.  This month's journal features a discussion titled "Patents and Public Choice."  The feature essay is authored by Eli Dourado, a research fellow at the Mercatus Center at George Mason University, and critically tackles the U.S. patent system.  There is one responding essay by Professor Zorina Khan (I recently highlighted one of her papers concerning patent trolls, here).  Forthcoming essays will be published by Professor John F. Duffy of the University of Virginia Law School and Professor Christina Mulligan of Brooklyn Law School.  Mr. Dourado's essay is titled "The True Story of How the Patent Bar Captured a Court and Shrank the Intellectual Commons."  The essay essentially argues that the U.S. Court of Appeals for the Federal Circuit, the supposedly specialist patent court in the U.S. with nationwide jurisdiction over patent appeals from U.S. district courts and over patent appeals from the United States Patent and Trademark Office, has been captured by the patent bar and has continuously expanded patent eligible subject matter to the detriment of innovation.  He points to software patents as a problem, including a discussion of the tragedy of the anticommons, as well as patent trolls.  Despite the U.S. Supreme Court's attempts to rein in software patents, he believes the Federal Circuit will continue to evade Supreme Court precedent (maybe true, but the composition of the court has been changing).  Here are his proposals for reform:

It would be better instead simply to abolish the Federal Circuit and return to the pre-1982 system, in which patents received no special treatment in appeals. This leaves open the possibility of circuit splits, which the creation of the Federal Circuit was designed to mitigate, but there are worse problems than circuit splits, and we now have them.

Another helpful reform would be for Congress to limit the scope of patentable subject matter via statute. New Zealand has done just that, declaring that software is “not an invention” to get around WTO obligations to respect intellectual property. Congress should do the same with respect to both software and business methods.

 . . . Current legislation in Congress addresses this class of [patent troll] problem[s] by mandating disclosures, shifting fees in the case of spurious lawsuits, and enabling a review of the patent’s validity before a trial commences.

What matters for prosperity is not just property rights in the abstract, but good property-defining institutions. Without reform, our patent system will continue to favor special interests and forestall economic growth.

I am not so convinced that returning to the uncertainty and the splits of jurisdiction that existed before the creation of the Federal Circuit, and to “races to the courthouse,” would put us in a better position.  And the party advocating for change and carrying the burden of proof may need to make a stronger case for reform, given the relative success of the biotechnology and information technology industries in the U.S.  Professor Khan offers an incisive rebuttal, here.  This blog has featured posts challenging the assertion that patents in the information and communications technology space are inhibiting innovation, here [although I do wonder about price], and describing counter-arguments to proposals to reduce the Federal Circuit's influence over patent law, here.  We look forward to Professor Duffy's and Professor Mulligan's essays.  [Hat tip to Professor Dennis Crouch's Patently-O blog for the lead to the essay.]

Monday, 23 December 2013

The Debate about the U.S. Court of Appeals for the Federal Circuit Continues

Donald Dunner, one of the most respected advocates before the U.S. Court of Appeals for the Federal Circuit and a partner at Finnegan Henderson, recently addressed Chief Judge Diane Wood’s (7th Circuit) criticism of the Federal Circuit.  His comments were published by the Federal Circuit Bar Association, here.  Dunner challenges Chief Judge Wood's arguments and states:

First, the specialist/generalist alternatives that she posits are reminiscent of the pre-Federal Circuit dialogue and the Rifkind comments to which the Meador proposal was expressly directed. While the Federal Circuit reviews almost all the patent appeals from the district courts and several other tribunals, and its judges develop meaningful expertise in patent law, it is by no means a specialist court. As I earlier noted, only four of the current active Federal Circuit judges had pre-judicial patent backgrounds and that has been true since the inception of the Court in 1982. It is also likely to continue to be true since even the patent bar is comfortable with the notion of having a limit on patent-trained judges on the Court. And the inclusion of many non-patent areas of review within the Court’s jurisdiction further minimizes the prospect that its judges will develop tunnel vision and become Egyptian Priest-like, as Judge Rifkind feared, or that they will never explain what the rules are or why one side or the other prevailed, as Chief Judge Wood fears.

Second, Judge Wood’s repeated focus on the complexity of patent appeals and on the fact that those appeals are no more complex than the non-patent appeals handled regularly by judges in the regional circuit courts is a strawman. The Federal Circuit was not established because it was felt that a special court was needed to deal with complex legal issues. If that was anyone’s concern, it was not vocalized loudly, and indeed I personally do not recall hearing of it -- and I was heavily involved in the events leading to the Court’s formation. On the contrary, the essential arguments in favor of the Court had to do with the widespread attitudinal differences between the circuit courts of appeals’ approach to patent law and the attendant lack of uniformity and predictability in their decision-making, leading to rampant forum shopping and the negative impact that had on corporate R&D decisions.

Third, Judge Wood’s concern about the need for percolation is understandable but not a reason to eliminate the Federal Circuit’s exclusive jurisdiction over patent appeals. For the fact is that the current Federal Circuit model generates a significant amount of percolation, not only in the not infrequent dissents from panel decisions but from the meaningful number of en banc decisions which generate their own meaningful number of dissents. These dissents, coupled with regularly filed amicus briefs and the not infrequent requests by the Supreme Court to the Solicitor General to provide recommendations as to whether Federal Circuit decisions should be reviewed by the Supreme Court, provide the diversity of views which Judge Wood feels is so important, without forfeiting the uniformity and predictability which was essentially non-existent before the establishment of the Federal Circuit.

Fourth, Chief Judge Wood’s observation that the lines between patent law and other areas of IP law are blurring and that there’s no reason why patent law should be singled out for special treatment ignores the fact that these other areas of IP  law were not faced with the problem of huge attitudinal differences between the regional circuits that led to massive forum shopping and a lack of predictability and uniformity in decision-making. As to the quality of Federal Circuit decision-making, which has been called into question by Judge Wood, it compares favorably to the quality of decision-making by the regional appellate courts. And that includes the two subject areas on which Judge Wood focuses: claim construction and obviousness. The Federal Circuit’s decision to make claim construction the province of the bench rather than the jury was affirmed by the Supreme Court in Markman. The Federal Circuit’s decision to adopt no deference appellate review of district court claim construction was adopted en banc in Cybor but has been subjected to an intra-court percolation process leading to the recently heard but yet undecided Lighting Ballast en banc review, providing exactly the percolation process with which Judge Wood is so concerned. As to obviousness, one can debate whether the Federal Circuit’s TSM (Teaching, Suggestion, Motivation) test was responsible for what Judge Wood characterizes as a “low” standard of obviousness resulting in “the thickets of patent rights on marginal improvements”, but I would suggest that the amorphous, ill-defined Supreme Court KSR framework is hardly conducive to generating a uniform and predictable body of law, the raison d’etre for the formation of the Federal Circuit. And the frequent Supreme Court review of Federal Circuit decisions has been the subject of multiple and varying explanations by Supreme Court experts, most of which have not focused on the lack of quality of Federal Circuit decisionmaking.

Which leads me to Judge Wood’s specific proposal for dealing with her concerns. Simply stated, it is in my view unworkable. Before the establishment of the Federal Circuit, the regional appellate courts were all over the lot in their attitudes toward patents, and because litigants had significant choices as between district courts in one or another circuit, subject only to venue and jurisdictional constraints, there not only was extensive forum shopping but little uniformity or predictability in litigation outcomes. Yet that is exactly what would happen under Chief Judge Wood’s proposed regime. While she provides a choice to litigants as between the Federal Circuit or the regional circuit in which their claim was first filed, there is little doubt that that choice would be made based on the same considerations applicable to the pre-Federal Circuit regime, namely which court is most favorable to the particular interests of the litigants. And the problem is compounded by the fact that at the district court level, before the choice of the appellate court is made, the district court would not know whose appellate jurisprudence to follow, not only on substantive but on procedural issues. As demonstrated by the pre-Federal Circuit experience, differences in jurisprudential approaches were often outcome-determinative. Nor is the problem alleviated by the JPML option which she provides for multiple pending appeals pertaining to a single patent in different circuits. For the dysfunctional system that predated the Federal Circuit was not keyed to multiple pending appeals pertaining to a single patent in different circuits. On the contrary, it was keyed to the fact that a patentee or accused infringer of a single patent had meaningful options to forum shop to select a favorable jurisdiction, an option which would also be available under Chief Judge Wood’s proposal. In short, not only are the problems Chief Judge Wood identifies not meaningful but her proposal to take us back to what she calls the “bad old days” is unworkable... It is accordingly my view and that of many of my colleagues in the bar that the appellate experiment that began 31 years ago has been a hugely successful one, for the reasons I have spelled out, and that it is not in need of a major fix of the type contemplated by Chief Judge Wood...

I find Mr. Dunner’s arguments persuasive.  What do you think?  Here is additional coverage of Mr. Dunner’s comments by Corporate Counsel, as well as additional discussion by Mr. Dunner.