New USPTO subject matter memo in light of Vanda Pharms. v. West-Ward

After a relatively long break of over a year without subject matter eligibility memos following Amdocs, the USPTO looks like it’s back on track. This year has already seen the Berkheimer memo. Now its latest memo from Bob Bahr, issued yesterday, June 7, 2018, addresses Vanda Pharms. v. West-Ward Pharms. (Fed. Cir. Apr. 13, 2018).

The memo includes guidance to examiners for examining diagnostics applications under Section 101. The memo is nothing revolutionary for examining under the Mayo/Alice framework, as it closely follows the Vanda holding. But the memo does clarify that examiners should consider a claim that “applies” a natural relationship as satisfying Step 2A of the Alice test, without having to reach the “routine, conventional, and well-known” analysis of Step 2B. Specifically, the memo states that practically applying a natural relationship should be considered eligible–it is not necessary for such method of treatment claims to include non-routine or unconventional steps.

The recent memos offer some hope that the USPTO will continue to improve the predictability of how Section 101 rejections are applied.

Board panel citing Berkheimer to reverse judicial exception rejection to diagnostics claims: no evidence

In the two weeks since we predicted that the PTAB would start to dramatically change the outcomes of its Section 101 rejections, we have seen no such change. Since then, recap emails have mostly shown affirmances (only 7 reversals out of 86 total Section 101 decisions = 8% reversal rate). But a decision in yesterday’s recap email shows precisely the kind of rejection analysis that is expected to become more mainstream at the PTAB.

Ex Parte Galloway et al (PTAB May 22, 2018) reversed the judicial exception rejection under Section 101 because of a lack of evidence. The panel, consisting of Donald E. Adams, Demetra J. Mills, and Ulrike W. Jenks, found that the Examiner had not provided evidence to support a prima facie case of patent ineligible subject matter.

The panel cited Berkheimer in support of finding the Step 2 analysis defective: “The Examiner has not established with appropriate factual evidence that the claimed method uses conventional cell counting methods.”

As a stylistic aside, Section 101 rejections are typically presented toward the very top of a decision. It is unclear how or why this started (it may stem from examiners or practitioners ordering the statutory rejections), but the practice has persisted in the Board’s decisions for several years. A recent trend, however, is for the Board to analyze Section 101 after the prior art rejections. The reason now makes sense: the lack of a sound prior art rejection can support the conclusion that Step 2 of a Section 101 rejection is improper.

And that is precisely what happened here. The panel proceeded to support its assertion (that step 2 of the Alice/Mayo framework was defective) by referring to its obviousness reversal. In other words, the Board’s finding of non-obvious claims supported that the claim features were not simply conventional or known in the art.

Another interesting point to note about this case is that it reinforces the much higher reversal rates of Section 101 judicial exceptions. The Board’s practice, as in this case, appears to be helping the patent-eligibility of diagnostics inventions.

As the PTAB becomes more confident in using Berkheimer in its decisions, expect more of the same analysis as in Ex Parte Galloway. The appeal backlog has far too many cases where the Examiners did not have the guidance of Berkheimer to establish the proper evidence for Step 2. Thus, the necessary analysis from the Board need only be short and crisp.

Tech center directors currently use their own appeal metrics for assessing examiners, but should use Anticipat data instead

The USPTO has a vested interest in knowing how well its patent examiners examine applications. It tracks production, efficiency and quality. Even though quality examination has always been tricky to measure, one metric comes pretty close: an examiner’s appeal track record. And while tech center directors have had access to this data, it has until recently been difficult for anyone else to access. Here we explore the known gaps in how this metric is being used at the USPTO.

According to sources at the USPTO, directors–who oversee each technology center–have access to their Examiners’ appeal track records. The more an Examiner gets affirmed by the PTAB on appeal, the more reasonable the Examiner’s rejections, the theory goes. This means that directors can evaluate examiners based on how often an examiner gets affirmed.

The acceptable examiner appeal track record appears to depend on the director. An Examiner’s affirmance rate significantly below the director’s average will attract attention. The USPTO as a whole has an affirmance rate at the PTAB that hovers around 60%, and different art unit groupings vary significantly from this global rate. An affirmance rate consistently lower than the relevant average can put a question mark on the Examiner’s examination quality.

Even without knowing the specific contours of the acceptable affirmance rate at the USPTO, a look at the numbers can give an Examiner a general idea of how well he/she is doing. This can help an Examiner proactively learn about these metrics and guide his/her appeal forwarding strategy. (To be clear, examiners do not appear to be punished in any way for being reversed.)

While the USPTO’s appeal outcomes are available from other patent analytics services, those services rely only on the USPTO’s own outcome labels, which are based on how the overall decision was decided.


This decision-based outcome doesn’t communicate which issues are causing examination problems (which issues are being reversed at the Board). By contrast, Anticipat provides a detailed breakdown of all of an Examiner’s decisions. Examiners can thus easily pull up all of their appealed decisions and quickly see on which issues they were affirmed/reversed/affirmed-in-part.

On top of Examiner-specific data, Anticipat can identify reversal rates by ground of rejection across art units. Take obviousness rejections, for example. Using Anticipat’s Analytics page to look up art unit 2172, in the computer/electrical arts, over the past couple of years, the pure reversal rate is about 18%. This is lower than the tech center reversal rate of 27% and lower than the global USPTO reversal rate for this period.

On the other hand, art unit 1631 in the biotech arts has a much higher reversal rate with a decision pool of about the same size. Specifically, art unit 1631 has a reversal rate of 43% over the past couple of years. This is greater than the 26% reversal rate of its tech center, 1600.


Finally, art unit 3721 in the mechanical arts has an obviousness reversal rate much higher than both of the above examples. Specifically, rejections from 3721 were wholly reversed at a rate of 53% over the past couple of years. This is higher than the tech center reversal rate of 44%, which is in turn higher than the global USPTO rate.

The granularity of appeal data can show what currently available data for appeals does not show: whether an Examiner is substantively doing a good job of examining applications. There are three reasons this is important for meaningful analysis of the metric.

First, as we’ve previously reported, the USPTO labels a decision as affirmed if even a single rejection sticks for all pending claims. So the USPTO/director statistics, and the other patent analytics websites that provide this affirmance rate, lack the proper context. And without such context, the appeal outcome is an incomplete and even misleading metric.
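To illustrate the mismatch, here is a minimal Python sketch with hypothetical data (not Anticipat’s actual code or outcomes): under the decision-level convention, a decision with a reversed Section 101 ground still counts as “affirmed” whenever any other ground sticks.

```python
# Hypothetical decisions: each lists the per-ground outcome on appeal.
decisions = [
    {"grounds": {"103 obviousness": "affirmed", "101 abstract idea": "reversed"}},
    {"grounds": {"101 abstract idea": "reversed"}},
    {"grounds": {"103 obviousness": "affirmed"}},
]

def decision_label(decision):
    """Decision-level label: 'affirmed' if any ground of rejection sticks."""
    return "affirmed" if "affirmed" in decision["grounds"].values() else "reversed"

# Decision-level view: what the raw USPTO affirmance statistic reports.
labels = [decision_label(d) for d in decisions]
affirmance_rate = labels.count("affirmed") / len(labels)
print(f"decision-level affirmance rate: {affirmance_rate:.0%}")  # 67%

# Issue-level view: the per-ground context the raw statistic hides.
issue_outcomes = [(g, o) for d in decisions for g, o in d["grounds"].items()]
reversed_101 = sum(1 for g, o in issue_outcomes
                   if g.startswith("101") and o == "reversed")
print(f"Section 101 grounds reversed: {reversed_101}")
```

In this toy sample, both Section 101 grounds were reversed, yet the decision-level view still reports a 67% affirmance rate, which is the kind of distortion the issue-level breakdown corrects.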

Second, not all of the responsibility for low affirmance rates falls on the Examiner. For example, the two other conferees at the appeal conference can push applications to the Board that don’t have good rejections. But the Examiner-specific metric is a good starting point for spotting deviations from the norm. Anticipat allows for other Examiner lookups (including the SPE) to determine conferee track records.

A third source of variance in Examiner appeal outcomes stems from how the judges label the outcome. While it is somewhat rare for a decision to include a newly designated rejection, it does happen. And as Bill Smith and Allen Sokal have recently pointed out in a Law360 article, decisions with new designations are inconsistently labeled as affirmed or reversed. Sometimes the panel will reverse the Examiner’s improper rejection but introduce a new rejection on that same ground with its own analysis. Other times the panel will affirm the Examiner’s improper rejection with its own analysis and be forced to designate the rejection as new. These small differences in patent judges’ preferences can impact an Examiner’s appeal profile.

Anticipat makes up for these shortcomings by providing greater context to outcomes and grounds of rejection. You can look at judge rulings in a particular tech center and identify patterns. For example, you can see whether panels tend to reverse or affirm when introducing new rejections.

Other valuable information, such as art unit and tech center data, can predictively guide an Examiner’s assessment of the chances of an appeal at the Board. If a particular rejection consistently gets reversed on appeal, this knowledge can guide the decision at the pre-appeal conference or appeal conference whether to forward the case to the Board based on the specific rejection at hand, especially if the appeal consists of only a single issue.

With this increased granularity in appeal data come more questions, many of which currently have no clear answers. For example, to what extent are greatly disparate appeal outcomes the result of differing quality of examination? To what extent are Examiners across different tech centers evaluated based on appeal outcomes? Is there a point at which an Examiner is considered to need improvement based on appeal outcomes? Could appeal outcomes–even if they include many reversals–affect the Examiner’s performance review? Likely not. But a better lens could prompt questions about disparate reversal rates across art units.

Recent critiques of the PTAB ex parte appeal process focus on Examiner involvement post-appeal

In the past month, two complementary but distinct criticisms of the ex parte appeal process have emerged. They deal with the way the Board treats appeals where the Examiner embellishes or modifies the rejections between the last rejection on the record and the forwarding of the appeal to the Board. These are serious criticisms that deserve serious attention. As people learn more about current Examiner practices, expect change at some level at the PTAB.


The first criticism comes from Bill Smith and Allen Sokal on Law360 in a piece titled “A Way to Improve PTAB Ex Parte Appeals.” In the article, the authors decry a practice by examiners at the Examiner’s Answer stage: examiners will copy and paste the written rejection from the appealed office action into the “statement of the rejection” section, then state new facts and reasons in support of the rejection in the “response to arguments” section.

In essence, the Examiner gets to clean up the rejection before being heard by the Board. And for the most part, the Examiner can get away with introducing new analysis and new facts (a new ground of rejection) without reopening prosecution. Then the Board decides the enhanced rejections as if they were part of the original rejection being appealed. According to the authors, this practice “injects unpredictability into the board’s decision since the appeal brief addressed the as-stated record rejection, not the rejection based on the expanded facts and reasons.”

The solution that the authors propose: the Board should limit its review of the merits to the facts and reasons in the as-stated rejections in the “statement of rejection” section of the Examiner’s Answer. If an Examiner raises anything else in the Answer, the Board should give it no weight.

The second criticism is interrelated and comes from a recent lawsuit, Odyssey Logistics & Tech. Corp. v. Andrei Iancu (E.D. Va. May 11, 2018). This lawsuit challenges the above-noted practice at a more fundamental level by attacking the Amended Ex Parte Appeal Rules enacted on January 23, 2012.

The complaint cites U.S. Patent Application No. 11/458,603 as an example of the appeal process gone awry. In that Examiner’s Answer, the Examiner apparently cited three patents, for the first time, as new evidence to be considered in support of his rejection. The Examiner’s Answer also allegedly included changes in the Examiner’s rationale for his rejections.

While this fact pattern is eerily similar to the one described by Sokal and Smith, here the complaint alleges that these new grounds of rejection would not be proper but for the Amended Ex Parte Appeal Rules and their retroactive application to the ’603 application. According to the complaint, the Amended Rules included changes to 37 C.F.R. § 41.35(a) that change the start of the Board’s jurisdiction. Modifications were also allegedly made to the provisions for new grounds of rejection in 37 C.F.R. § 41.39, and new definitions were provided for the terms “Evidence” and “Record” in 37 C.F.R. § 41.30.

According to the complaint, the Examiner, the official whose rejection is challenged in the appeal, will always have the jurisdiction necessary to change the rejection challenged on appeal or provide entirely new grounds of rejection, and may add arguments, dictionaries, or other documents to the record, even after the applicant has filed an appeal brief. 76 Fed. Reg. at 72276-78.

The complaint then proceeds to raise a very interesting point that is consistent with the points raised by Sokal and Smith.

In this event, the decision rendered by the PTAB is not a decision affirming or reversing the original rejection(s) challenged on appeal but instead is a decision on the new and different grounds of rejection. If the PTAB’s decision is on different grounds of rejection, the applicant’s right to obtain a patent term adjustment is frustrated since there is no reversal of the original rejection on the written record. See 35 U.S.C. §154(b)(1)(C). Therefore, the Amended Jurisdiction Rule interferes with applicant’s statutory right to appeal and to obtain a patent term adjustment if the appeal is of a rejection that should be reversed.

 

In reviewing tens of thousands of final decisions at the PTAB, we at Anticipat.com can confirm the unpredictability in how such rejections are decided (more on this in a forthcoming piece). As a relatively obscure procedure in a relatively dry practice area, this issue does not get many headlines. But as a new administration seeks to foster greater predictability and strength in the patent system, watch for this issue to become more prominent, with some sort of change on the horizon.

 

Expect the Berkheimer-driven patent-eligibility pendulum to swing at the PTAB

The past few months have seen huge developments in patent-eligibility at the USPTO. In the three and a half years after Alice, the most effective way to argue against a patent-eligibility rejection of a software application was to focus on Step 1–that the claims are not directed to an abstract idea. But based on these recent developments, Step 2–that additional elements of the claims transform the judicial exception into something more–looks to be the more powerful approach. The only problem is that the PTAB has not yet caught on. It will.

These huge developments have taken place in the form of Federal Circuit decisions deciding patent-eligibility favorably to the patentee, especially Berkheimer v. HP Inc., 881 F.3d 1360, 1369 (Fed. Cir. 2018). Such a clear articulation of the need for factual findings for Step 2 should usher in big change in how the Alice/Mayo framework is applied.

Then, on top of the decisions, came the revised USPTO Berkheimer memo last month. These guidelines emphasized that to establish under Step 2 that an additional element (or combination of elements) is well-understood, routine or conventional, the examiner must expressly support the rejection in writing with one of the following four:

1. A citation to an express statement in the specification or to a statement made by an applicant during prosecution that demonstrates the well-understood, routine, conventional nature of the additional element(s).

2. A citation to one or more of the court decisions discussed in MPEP § 2106.05(d)(II) as noting the well-understood, routine, conventional nature of the additional element(s).

3. A citation to a publication that demonstrates the well-understood, routine, conventional nature of the additional element(s).

4. A statement that the examiner is taking official notice of the well-understood, routine, conventional nature of the additional element(s).

It should come as no surprise to any practitioner that Examiners have not been including the above support in their Step 2 analyses for these additional elements of claims. This is no slight to the examining corps; it simply was never a USPTO requirement. So if the PTAB were faithful to the principles set forth in the guidelines, one would expect a dramatic turning of the tide.

While the PTAB is not bound by the USPTO examiner memos, it shouldn’t stray too far from them. Plus, it must comply with the Federal Circuit decisions, which are consistent with the guidelines. So one wouldn’t expect the PTAB to continue its practice of overwhelmingly affirming on Section 101. However, so far the PTAB has not significantly deviated from its previous course of mostly affirming judicial exception rejections.

Since April 19, 2018–the day the Berkheimer memo was published–there have been 120 decisions deciding judicial exceptions. Of these, only 13 reversed, a reversal rate of 11%. This 11% reversal rate is below the recently reported reversal rate for abstract ideas of 14%. It would appear that panels have not yet had the time to incorporate this new Step 2 framework into their decision-making. Or, alternatively, they are preoccupied with the arguments raised by the appellant. Expect a greater number of requests for rehearing on these.

Sooner or later, these PTAB judges should realize that many Section 101 rejections on appeal do not have the proper support for Step 2. This is not to say that these Examiners could not reformulate a proper rejection on remand, given another opportunity. While theoretically the judges could affirm the 101 rejections with a designation of new, the Board may not be well-equipped to do so, as this new requirement demands a factual basis supporting Step 2. That is, the PTAB is a body that decides the propriety of pending rejections, not a body for searching out and making such supporting findings. So expect a greater number of reversals that let the Examiners follow Berkheimer.

Movie Review: AlphaGo is fresh

This blog focuses mostly on patent law, patent prosecution (especially ex parte appeals), and related statistics. But Anticipat’s end goal is to better understand the entirety of patent prosecution through analyzing big patent data. So other technology topics are naturally very interesting. That is why today we present our first movie review for the recently debuted documentary “AlphaGo.”

The specific details of neural networks, machine learning and artificial intelligence are not for all audiences. In fact, these topics can be generally regarded as boring to most. The Netflix original “AlphaGo” is a documentary that turns this stereotype around with a thrilling man vs machine theme. In the process, it shows why deep learning is important and fascinating. It also touches on the human experience in a world that increasingly relies on computer algorithms.

As a side-effect, the film educates viewers on the game of Go. Go is to China, Korea and Japan what chess is to the West. Popularity aside, the two board games are quite different. While in chess different pieces with different possible moves seek to eventually pin a single opposing piece (the king), in Go players place their own colored stones (white or black) on a grid to claim the most territory. Because of the larger grid, Go is astoundingly complex, having 10^170 legal board arrangements. For context, there are only an estimated 10^80 atoms in the known universe.


The film details one of the most pivotal matches between man and machine in a match between Lee Sedol, one of the best Go players in the world, and the algorithm AlphaGo. Partly because of the complexity, experts thought that a computer was decades away from beating the best human. But the application of specific deep learning networks, which were aided by a semi-supervised network that learned from the games of the brightest Go players, greatly accelerated that future moment.

Lee Sedol was very confident going into the match. Even though AlphaGo had previously beaten the European champion, Fan Hui, the difference in skill between Fan (2nd dan) and Lee Sedol (9th dan) was stark. So leading up to the showdown with Lee Sedol, many wondered whether the match would even be close.

The first few games between Lee Sedol and AlphaGo established very convincingly how good this AlphaGo algorithm really was. One particular move, so-called move 37, was panned by critics as being a mistake by AlphaGo. Humans never would have considered such a move a good idea. But in the end, this move was described as “something beautiful” that helped win the game.

The documentary tells the journey from DeepMind’s perspective. This is a team that spent years developing the technology to train AlphaGo. And the film shows moments when the team understood areas of weakness in the program and really had no idea how it would fare against one of the world’s best. This side of vulnerability, not known to the public at the time, is especially interesting.

In a later game between the two, the film powerfully conveys the human spirit. Lee Sedol’s move 78, the “God move”, completely reversed the trajectory of the game. A moment of human triumph. It is understood that Lee Sedol was able to improve through this game. Speaking of Sedol, reporter Cade Metz remarked: “He improved through this machine. His humanness was expanded after playing this inanimate creation. The hope is that machine and in particular the technologies behind it can have the same effect with all of us.”

 

With such a story, questions of human obsolescence are bound to be raised. But the film answers an even better question: how humans will work going forward, aided by computers. After all, seeing how a machine can invent new ways to tackle a problem can help push people down new and productive paths. So the feeling after watching this movie was entirely more optimistic.

Since filming, the AlphaGo algorithm went on to beat Ke Jie, the game’s best player, in Wuzhen, China three games to zero. But like Lee Sedol, Ke Jie studied the algorithm’s moves, looking for ideas. He proceeded to go on a 22-game winning streak against human opponents, impressive even for someone of his skill.

Also since filming, DeepMind has created an improved algorithm called AlphaGo Zero, which does not rely on the semi-supervised network that learned from expert human Go players. Instead, this algorithm learned the game of Go entirely by itself. And the results have been amazing: in 100 simulated games, the improved algorithm beat the version featured in the film 100 games to 0. Source.

The creators of DeepMind hope to apply the AlphaGo algorithm to a whole host of applications. Indeed, Demis Hassabis, one of the creators of AlphaGo, has said that anything that boils down to an intelligent search through an enormous number of possibilities could benefit from AlphaGo’s approach.

In one of the concluding scenes, David Silver, lead researcher on the AlphaGo team, comments: “There are so many application domains where creativity, in a different dimension to what humans could do, could be immensely valuable to us.”

You will very likely not be disappointed by checking out the film AlphaGo. Don’t expect a documentary about patent law algorithms to be as broadly interesting any time soon.

Update: These firms overturn abstract idea (Alice) rejections on appeal at PTAB

(Update: Kilpatrick was previously reported as having 4 reversals; in fact, it has 7)

A previous post showcased firms that successfully appeal abstract idea rejections at the PTAB. In that post, two firms stood out as clear leaders in overcoming the most difficult ground of rejection on appeal, the Section 101 abstract idea rejection. These firms were Schwegman Lundberg Woessner and Morgan Lewis. Five months later, we update the top firms to add Kilpatrick Townsend and, with the aid of a recently introduced Customer Number lookup functionality, provide additional context on how many appeals it took each firm to get there.

Total Reversals for Abstract Idea Rejections (Numerator)

In an almost 2-year span post-Alice (July 25, 2016-April 30, 2018), there were 189 reversed abstract idea rejections on appeal at the PTAB. Of these, three firms–Schwegman, Morgan Lewis and Kilpatrick Townsend–were responsible for 11% of the reversals, with 7 reversals each. This puts them far ahead of the rest of the firms; for context, the next closest firm had 3 abstract idea reversals on appeal. We discuss each of these three firms in more detail below.

Total Abstract Idea Appeals (Denominator)

The first firm, Schwegman, took 42 abstract idea appeals to get its 7 reversals, a reversal rate of 17%. This is higher (more successful) than the average reversal rate for abstract ideas. Compared to other big patent firms, Schwegman pursues appeals of abstract idea rejections far more often. For comparison, during this window Knobbe Martens had 6 total abstract idea appeal decisions, while Fish & Richardson and Finnegan each had 19.

But even with a more aggressive appeal strategy, Schwegman still maintains a higher-than-average reversal rate. And of the firm’s 204 total appealed decisions, almost a quarter include an abstract idea rejection. This suggests that abstract idea rejections are a focus of the firm’s overall appeals. Here are the firm’s filters on the Anticipat Research page, along with the link to the Schwegman-filtered page.


The second firm, Morgan Lewis, took far fewer appeals to get to its 7 reversals: it appealed only eight cases, which translates into a reversal rate of 88% for abstract idea rejections. For a firm as big as Morgan Lewis, having only eight abstract idea appealed decisions is low compared to firms with a comparable number of applications: Schwegman, Finnegan, Fish, Kilpatrick and Knobbe.

The overall number of appeals for Morgan Lewis during this time period is 52. This suggests that Morgan Lewis is conservative in pursuing ex parte appeals–not only for abstract idea rejections but in general. But when Morgan Lewis does proceed to appeal a case (at least for Section 101 abstract idea rejections), it is very good at overturning the rejection. Again, see the Research page and the Morgan Lewis-filtered Research page here.


The third firm, Kilpatrick, took 40 abstract idea appeals to get to its 7 reversals. This reversal rate of 18% is slightly above average, and the volume suggests that Kilpatrick aggressively pursues appeals of this type of rejection. From 170 total appeals during that time period, abstract ideas make up a sizable part of the appealed rejections. Kilpatrick Townsend-filtered Research page here.
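The numerator/denominator arithmetic behind these three profiles reduces to a one-liner per firm. A quick Python sketch, using only the figures quoted in this post:

```python
# Back-of-the-envelope check of the reversal rates quoted in this post:
# reversals divided by total abstract idea appeal decisions per firm.
firm_stats = {
    "Schwegman": (7, 42),
    "Morgan Lewis": (7, 8),
    "Kilpatrick Townsend": (7, 40),
}

for firm, (reversals, appeals) in firm_stats.items():
    pct = round(100 * reversals / appeals)
    print(f"{firm}: {reversals}/{appeals} = {pct}%")
# Schwegman: 7/42 = 17%, Morgan Lewis: 7/8 = 88%, Kilpatrick Townsend: 7/40 = 18%
```

Note that Python’s round() uses round-half-even, so 7/40 (exactly 17.5%) rounds to 18%, matching the figure above.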


Conclusion

Each firm should be commended for its high number of abstract idea reversals. With such a difficult rejection, these firms are showing that one avenue for overcoming it is going straight to the Board for relief.

Context is extremely important for these statistics. Just because a particular firm has a higher reversal rate than another firm does not necessarily mean that the higher reversal rate firm is better. Perhaps the lower reversal rate firm is taking on more difficult cases. Perhaps the lower reversal rate firm had victories earlier in prosecution (like at the pre-appeal conference or appeal conference or even by responding to an Office Action) that are not counted in these statistics. But these statistics do show that when the Examiner conferees believe that an abstract idea rejection is proper, these firms know how to pursue a favorable outcome for their clients.

With a user account to Anticipat (sign up here for a free trial), you can look up the above-discussed listing of reversed abstract idea decisions using the following links.