Patenting Machine Learning Tech at USPTO vs EPO

Artificial intelligence technology has been around for a long time, but recent advances have prompted recognition of the transformative force that it truly is. While applicants have successfully patented artificial intelligence inventions for many years, the US has been more favorable than Europe for some types of AI. Here we focus on one area of difference: the patent-eligibility of NLP inventions at the USPTO versus the EPO, using board decisions to illustrate how the two jurisdictions diverge.

As some background, machine learning is the AI technique most frequently disclosed in patents, and is included in more than a third of all AI-related patent applications.

[Image: deep learning illustration]

Photo credit Aglaé Bassens et al., “Deep Learning Illustrated: A Visual, Interactive Guide to Artificial Intelligence” (August 5, 2019). 

Machine learning is an AI technique whose dominance keeps growing, with deep learning and neural networks the fastest growing of the lot. This is at least partially reflected in patent filings: machine learning patent applications grew by an average of 28% annually from 2013 to 2016, notably higher than the 10% average annual growth across all new areas of technology during the same period. Within this category, deep learning, used for example in speech recognition, had a 175% average annual growth rate from 2013 to 2016.

Among functional applications, computer vision is the most popular, mentioned in almost half of AI-related patents. The next hottest functional application is natural language processing. Examples of NLP in industry include document classification; machine translation; search engine optimization; speech recognition; and chatbots. This post focuses on the patent-eligibility of NLP.

Source: WIPO Technology Trends 2019, “Artificial Intelligence,” at 14 and 31.

In the US, machine learning applications have generally fared well for patentability purposes. Even though many machine learning inventions are rooted in software, and presumably vulnerable to Alice-type eligibility rejections, allowance rates have been substantially higher than in other software classes. For example, comparing class 706 (Artificial Intelligence: Data Processing) with class 705 (Financial, Business Practice, Management, or Cost/Price Determination: Data Processing), we see a stark difference.

[Chart: allowance rates by USPC class]

https://developer.uspto.gov/visualization/allowance-rate-uspc-class

One reason why allowance rates are so much lower for business methods may be art unit assignment. Machine learning inventions typically get assigned to art units 2121 or 2122, whereas business method inventions get assigned to the 3620s, 3680s, and 3690s. The 3600 art units are well known for applying knee-jerk Section 101 patent-ineligibility rejections, whereas the AI art units are not as preoccupied with Section 101. Often, Examiners in these machine learning art units recognize the cutting-edge technology in these applications and grant the patents relatively quickly.

But not all AI inventions are as easy to get allowed, especially depending on the jurisdiction. Take Europe, for example. The standard for patent-eligibility at the EPO differs from the US in that it requires a sufficiently technical nature (i.e., the claim must have a technical implementation or technical application). For image processing and speech recognition, this technical nature can be shown easily. But other types of machine learning technology, such as NLP, have not been as readily recognized as having a technical purpose.

As pointed out in this Marks & Clerk blog post, there is a historical context to difficulties in patenting some NLP technologies. 

In T 52/85, the Board considered a system for automatically generating a list of expressions whose meaning was related to an input linguistic expression.  The Board held that the relationship between the input and output expressions was not of a technical nature, but was instead a matter of their “abstract linguistic information content”.  The Board consequently found that the claimed subject-matter was unpatentable.

In another relatively old decision, T 1177/97, the technology at issue related to machine translation.  The Board again found the claimed subject-matter to be unpatentable, stating “Features or aspects of the method which reflect only peculiarities of the field of linguistics, however, must be ignored in assessing inventive step.”  This statement is often quoted by examiners when applying the Comvik approach to inventions in the field of natural language processing. Although the Board in T 1177/97 also held that “information and methods related to linguistics may in principle assume technical character if they are used in a computer system and form part of a technical problem solution”, it is hard in practice to convince the EPO that a technical problem is solved by the linguistic aspects of an invention. 

https://www.marks-clerk.com/Home/Knowledge-News/Articles/Patenting-Artificial-Intelligence-at-the-European.aspx#.XauaEZNKjBI

The US has been more favorable on the patentability of NLP technology. For example, the PTAB has recently reversed patent-eligibility rejections in a large proportion of AI applications. In one recent decision, a PTAB panel held that an NLP-related invention passes the two-step Alice framework. In Ex parte BAUGHMAN et al., Appeal No. 2019-000665 (PTAB Sept. 25, 2019), the PTAB overturned an Examiner’s rejection of the claims as directed to an abstract idea.

[Images: claims 1 and 2]

The Board, under step 2A prong 2, reasoned that the claim recites additional elements, which are outside the abstract idea, that include: “receiving, by the question answering system, a function call comprising an input question and a set of non-local context evidence in closure form.” The Board explained that the recited use of a “function call” and the use of “closure form” are particular (non-generic) software technology limitations. Specifically, the “function closure”-related software limitations recited in claim 1’s first step are integrated with the limitations that describe the abstract idea for generating answers to a question. 

The Board viewed the claim holistically, stating that “[t]aken as a whole, claim 1 recites a set of steps for a particular query- and hypothesis-based processing sequence and set of rules, executed by a QA system.” Then, citing McRO, the Board held that this amounts to “us[ing] the limited rules in a process specifically designed to achieve an improved technological result in conventional industry practice,” i.e., to improve the technology of QA systems. After coming to this determination, the Board found that the claim imposes meaningful limits on the application of the recited judicial exception for generating candidate answers to a question and thus is not directed to a patent-ineligible abstract idea.

AI will continue to transform all sectors of industry, and patentability standards across jurisdictions should continue to evolve to balance AI’s growing impact on society. As they do, it is important to plan prosecution strategy internationally with the best patent data.

The PTAB Sets Another Record for Reversing Abstract Idea Rejections

On April 30, 2019, a customer partnership meeting took place between USPTO Technology Centers 3600 and 3700 and the American Intellectual Property Law Association (AIPLA). The topics varied widely (e.g., a training on functional claim drafting and a training on means-plus-function limitations in medical device claims). But for these tech centers, the elephant in the room was Section 101, which Paul Kitch covered. Here we provide updated data on how the revised guidance continues to drive record-setting reversals of abstract idea rejections at the Board. We also show that these reversal rates are unevenly distributed across the tech centers, especially tech centers 3600 and 3700.

March 2019 saw the PTAB break another record for total abstract idea rejections reversed. We previously reported that February broke the record with 62 total reversals, but we predicted that the record would not last long. That prediction proved prescient: in March 2019, the PTAB wholly reversed 77 decisions, exceeding the previous record-holding month by 15.

[Chart: total abstract idea reversals by month]

Besides the total number of reversals, the PTAB also maintained a high reversal rate: about 33%, or roughly one in three abstract idea rejections, were reversed.

[Chart: abstract idea reversal rate by month]

We’re also seeing that Step 1 is becoming the main way in which PTAB panels overturn Examiner abstract idea rejections. In March, 61 decisions relied on Step 1 of the Alice/Mayo framework (Step 2A of the USPTO vernacular) while only 15 relied on Step 2 (Step 2B).

[Chart: reversals by Alice/Mayo step]

How do these reversals and reversal rates relate to tech centers 3600 and 3700 specifically? With tech center 3600 being the home of many business method art units, one would correctly assume lower reversal rates. From December 2018 through March 2019, 134 out of 520 decisions were wholly reversed, a reversal rate of 26%. Tech center 3700, home to mechanical and medical device tech, had 23 such reversals out of 47 total, a reversal rate of about 49%. The tech centers with the highest rates turned out to be tech center 2100 (27/43 = 63%), home to electrical and computer tech, followed by tech center 2400 (14/26 = 54%), also home to electrical and computer tech.
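The rates above are straight division of wholly reversed decisions by total decisions. A minimal sketch of the arithmetic, using the counts reported in this post and rounding to whole percentages:

```python
# Wholly reversed abstract idea rejections by tech center, Dec. 2018 - Mar. 2019.
# (reversed, total) counts are the figures reported above.
counts = {
    "3600 (business methods)": (134, 520),
    "3700 (mechanical/medical devices)": (23, 47),
    "2100 (electrical/computer)": (27, 43),
    "2400 (electrical/computer)": (14, 26),
}

for tc, (rev, total) in counts.items():
    # ":.0%" formats the fraction as a percentage with no decimal places
    print(f"TC {tc}: {rev}/{total} = {rev / total:.0%}")
```

Note that 23/47 rounds to 49% rather than an even 50%; small denominators make these per-tech-center rates noisy.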

The abstract idea reversals should continue for several months as applications already on appeal wait their turn before the Board. But as applications further upstream (e.g., at the appeal conference, pre-appeal conference, and before the Examiner) are increasingly allowed, expect the reversals to return to earth, especially as Federal Circuit decisions such as Athena v. Mayo (Fed. Cir. Feb. 2019) push back on how valid allowed claims (inspired by USPTO guidance and examples) really are in the real world.


Expect the Berkheimer-driven patent-eligibility pendulum to swing at the PTAB

The past few months have seen huge developments in patent-eligibility at the USPTO. In the three and a half years after Alice, the most effective way to argue against patent-eligibility rejections of software applications was to focus on Step 1: that the claims are not directed to an abstract idea. But based on these recent developments, Step 2, that additional elements of the claims transform the judicial exception into something more, looks to be the more powerful approach. The only problem is that the PTAB has not yet caught on. It will.

These huge developments have taken place in the form of Federal Circuit decisions deciding patent-eligibility favorably to the patentee, especially Berkheimer v. HP Inc., 881 F.3d 1360, 1369 (Fed. Cir. 2018). Such a clear articulation of the need for factual findings for Step 2 should usher in big change in how the Alice/Mayo framework is applied.

Then, on top of the decisions, came the revised USPTO Berkheimer memo last month. These guidelines emphasized that to establish under Step 2 that an additional element (or combination of elements) is well-understood, routine, or conventional, the examiner must expressly support the rejection in writing with one of the following four:

1. A citation to an express statement in the specification or to a statement made by an applicant during prosecution that demonstrates the well-understood, routine, conventional nature of the additional element(s).

2. A citation to one or more of the court decisions discussed in MPEP § 2106.05(d)(II) as noting the well-understood, routine, conventional nature of the additional element(s).

3. A citation to a publication that demonstrates the well-understood, routine, conventional nature of the additional element(s).

4. A statement that the examiner is taking official notice of the well-understood, routine, conventional nature of the additional element(s).

It should come as no surprise to any practitioner that Examiners have not been including the above support in their Step 2 analyses of these additional claim elements. This is no slight to the examining corps; it simply was never a USPTO requirement. So if the PTAB is faithful to the principles set forth in the guidelines, one would expect a dramatic turning of the tide.

While the PTAB is not bound to the USPTO examiner memos, it shouldn’t stray too far from them. Plus, it must comply with the Federal Circuit decisions, which are consistent with the guidelines. So one wouldn’t expect the PTAB to continue its practice of overwhelmingly affirming on Section 101. However, so far the PTAB has not significantly deviated from its previous course of mostly affirming judicial exception rejections.

Since April 19, 2018, the day the Berkheimer memo was published, there have been 120 decisions deciding judicial exceptions. Of these, only 13 have reversed, a reversal rate of 11%. This is below the recently reported reversal rate for abstract ideas of 14%. It would appear that panels have not yet had time to incorporate this new Step 2 framework into their decision-making, or alternatively that they are preoccupied with the arguments raised by the appellant. Expect a greater number of requests for rehearing on these.

Sooner or later, these PTAB judges should realize that many Section 101 rejections on appeal lack the proper support for Step 2. This is not to say that these Examiners could not reformulate a proper rejection given another opportunity. But while the judges could theoretically affirm the 101 rejections with a designation of new ground, the Board may not be well-equipped to do so, as this new requirement demands a factual basis supporting Step 2. That is, the PTAB is a body that decides the propriety of pending rejections, not a body for searching out and making such supporting findings. So expect a greater number of reversals that let the Examiners follow Berkheimer.

Presentation Recap on Abstract Idea Developments at the PTAB

Trent Ostler did a deep dive on abstract idea developments at the PTAB yesterday at the AIPLA Joint Committee Hot Topics Presentation (Patent Law Committee and ECLC). He used data from Anticipat.com for all his results. In case you missed it, here it is:

I’m going to talk about Section 101 developments at the PTAB of ex parte appeal decisions. As many are aware, ex parte appeal decisions involve those applications that have been twice rejected, appealed, and gone all the way to a written decision by a panel of judges at the PTAB.

[Slide 1]

Now, the umbrella of Section 101 nonstatutory subject matter includes a variety of rejections. But as Theresa indicated, the most activity is in the abstract idea space. So here, we’re going to exclusively focus on developments of abstract ideas at the Board.

Before we get in too deep, I’m going to lay a foundation for an important point on appeals.

[Slide 2]

Typically, when we think about outcomes for these decisions, we think of the following pie chart put out by the PTO. It shows that most of the time the Examiner is upheld (labeled affirmed, at 56%). A much smaller percentage of the time the Examiner is overturned (reversed, at 29%), and the remaining chunk is a mix of the two (affirmed-in-part).

A big problem with this chart is that it treats every appealed application the same. In reality, some grounds of rejection are much more likely to be overturned by the PTAB than others, as the ex parte PTAB subcommittee of AIPLA has shown.

[Slide 3]

Here is an illustration of reversal rates by ground of rejection for the past year and a half, taken from Anticipat.com, a relatively new website that tracks all grounds of rejection and outcomes for ex parte appeals. (It also offers free academic use and steeply discounted Examiner use.) The graphs show reversal rates, with blue for rejections wholly reversed and orange for rejections reversed in part.

Some of these results are surprising. Section 102 anticipation rejections and Section 112 rejections are entirely reversed about 50% of the time. We’ve found these rates to be remarkably consistent even with multiple grounds of rejection being decided.

101 rejections are reversed about 21% of the time. If we drill down into abstract ideas, the rate is even lower: about 17%, one of the lowest reversal rates of any ground of rejection. But at the same time, this shows an appeal is not a completely futile endeavor: almost a fifth of the time, the Examiner’s abstract idea rejection gets overturned.

[Slide 4]

Within the past year and a half, abstract idea rejections have been reversed in every tech center, though some tech centers have higher reversal rates than others. The rate is especially low in the business method art units; in the biotech tech center 1600, it is higher.

[Slide 5]

Over the course of the last year and a half, there have been about 100 reversed abstract idea rejections. Some time periods show higher reversal rates than others. This may reflect the Board correcting an overreaction in applying abstract idea rejections directly after Alice. It may also result from Federal Circuit decisions that were either favorable or unfavorable to patent-eligibility at the time.

These PTAB decisions follow three different general arguments for reversals. Each of these arguments can stand alone in reversing a rejection and can be used in combination.

  • Prima Facie Case (17 decisions) – The Examiner did not provide sufficient articulation
  • Step 1 (76 decisions) – Not “Directed To” Abstract Idea
  • Step 2 (44 decisions) – Claim Elements Alone or in Combination Transform Abstract Idea into Something More

We’ll briefly step through what these different abstract idea arguments look like in practice.

First, the prima facie case. 

[Slide 7]

(see https://anticipat.com/research?id=86526) Many of us practitioners, especially those who work in the computer arts, have seen this: a rejection that doesn’t meet the minimum threshold required for a prima facie case. This decision shows the Board overturning the Examiner’s rejection for not making that case. The rejection can’t be conclusory; it has to analogize to a case with an abstract idea; it has to explain why the claim is not more than the asserted abstract idea. If the Examiner doesn’t do this: reversed.

Next, step 1.

[Slide 8] (https://anticipat.com/research?id=92479) This is the most frequent category for overturning abstract idea rejections, in part due to recent decisions holding that technological improvements are relevant in step 1, even if the guidelines suggest otherwise. Here, the Board breaks down the Examiner’s asserted analogous abstract idea, then recharacterizes the claimed invention as an improved device rather than an abstract idea. Importantly, the Board supports its conclusion using the specification of the application, including the background.

Finally, step 2. 

[Slide 9] (https://anticipat.com/research?id=91985) This is often used, as in the following example, in conjunction with step 1. Here, the Board deconstructs the delta between the Examiner’s asserted abstract idea and what is actually in the claims. As is often the case, there is more to the claims than the Examiner’s characterization of them. Here the Board recognizes that the Examiner failed to show that the claim elements do not amount to significantly more or add meaningful limitations. This step can bleed somewhat into the prima facie analysis: the Board can either disagree with the Examiner’s assertion or rule that the assertion does not provide the necessary analysis.

Anticipat has a lookup tool where you can put in the specific argument (e.g., step 1, step 2, prima facie case) and retrieve all the relevant decisions, mapped to your particular art unit or Examiner. Having relevant decisions can guide your strategy in responding to Office Actions or in choosing the most successful arguments for your appeal brief.

Next, which legal authorities work best for each type of argument? Here we discuss what judges rely on in reversing the various steps of the abstract idea rejection. These are not merely legal authorities that appear somewhere in a decision; rather, they are cases the PTAB explicitly analogized to or cited in deriving its holding. Anticipat Analytics allows looking up the legal authority for each type of argument used.

For step 1, the clear leading cases cited when reversing are DDR Holdings and Enfish.

[Slide 10]

For step 2, the clear leading legal authority used in reversing rejections is Bascom.

[Slide 11]

The PTAB decisions show similar volatility as the courts in deciding abstract idea rejections. Here are some of the more contentious areas that are being decided both in reversing and in affirming.

First, what is a technological improvement? To what extent does the Examiner need to provide evidence for assertions of routine/conventional activity? How closely does the Examiner need to analogize to a similar case to show that the claim is “directed to” an abstract idea? To what extent must the Examiner look to the specification to interpret the abstractness of the claims? These questions do not have clear answers, but the PTAB at least has more answers than the Federal Circuit, simply from its sheer volume of decisions.

Another consideration is that when considering appealing an application, even if the application currently does not have an abstract idea rejection, the judges may introduce one sua sponte.

It is relatively rare, but it does happen.

[Slide 14]

It can happen in one of three ways:

First, the panel can formally introduce a previously unapplied abstract idea rejection. Second, the panel can strengthen an existing rejection with additional analysis, designating the rejection as new. Third, the Board sometimes suggests that the Examiner consider 101 without issuing a formal new 101 rejection. Keep this in mind as you consider an appeal: you don’t want to open up a can of worms if you don’t have to.

Conclusion

In sum, you can see the reversal rates for 101 rejections directly related to your area of interest. You can also see the arguments used in overcoming these rejections (including the legal authorities relied upon) and incorporate them into your own practice.

Too Simplistic: How the USPTO measures outcomes for ex parte PTAB appeals

A patent applicant usually decides to appeal a rejection as a last resort because of the substantial cost and time. When the applicant decides to overlook the substantial cost and time, it is because she believes independent judges will objectively overturn at least one of (but hopefully all) the rejections. These administrative patent judges (APJs) have experience, technical backgrounds, and are independent from Examiners. So if this body of judges were to sustain Examiners’ rejections most of the time, you would think that the Examiners are doing a good job of examining applications. And if the Examiners are doing well, it would appear that the U.S. Patent & Trademark Office (USPTO) is doing well. But it’s not.

Currently, the USPTO measures decision outcomes of ex parte appeals in three ways: affirmed, affirmed-in-part, or reversed. This is highlighted by the USPTO’s recently released statistics on outcomes of ex parte appeals for FY2017. These stats show that the Patent Trial and Appeal Board (PTAB) very frequently upholds Examiners on appeal, with a 55% affirmance rate, consistent with previous years. These affirmance rates suggest a job “well done” by the USPTO. However, the way the USPTO counts affirmances yields counterintuitive and misleading results, especially in cases involving multiple grounds of rejection. Indeed, for accountability purposes, this way of measuring appeals cloaks the failures of the USPTO’s Examining Corps.

[Chart: FY2017 ex parte appeal outcomes]


The USPTO currently measures ex parte appeals in relation to the total appealed claims—not the total pending rejections. If all of the appealed claims stand rejected under at least one ground of rejection, the decision is affirmed. Thus, only one ground of rejection affirmed for the appealed claims is required for a decision to be marked affirmed by the USPTO. Under this measuring system, assuming an Examiner rejects all claims under five different grounds, the decision is marked affirmed even if the Board reverses four of the five grounds.
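The marking rule just described can be made concrete with a short sketch (a hypothetical illustration of my own; the data structures and function below are not USPTO code):

```python
def mark_decision(claims, affirmed_grounds):
    """Mark an appeal outcome the way the USPTO does.

    claims: {claim number: set of grounds rejecting that claim}
    affirmed_grounds: the grounds of rejection the Board sustained

    A claim still "stands rejected" if ANY of its grounds was affirmed;
    the decision is marked "affirmed" only if EVERY appealed claim
    stands rejected under at least one ground.
    """
    standing = {c for c, grounds in claims.items() if grounds & affirmed_grounds}
    if standing == set(claims):
        return "affirmed"
    if standing:
        return "affirmed-in-part"
    return "reversed"

# All three claims rejected under five grounds; the Board reverses four of five.
claims = {c: {"101", "102", "103", "112", "double patenting"} for c in (1, 2, 3)}
print(mark_decision(claims, affirmed_grounds={"double patenting"}))  # -> affirmed
```

Even with four of the five grounds reversed, the single surviving ground keeps every claim standing rejected, so the whole decision is counted as affirmed.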

This way that the USPTO measures appeals undermines use of ex parte appeals data for accurate accountability of the USPTO in two ways. First, the data do not show which grounds of rejections get overturned on appeal. As we have previously pointed out in this blog, several of the individual grounds are currently being completely reversed at rates over 50%. This means that for certain legal grounds of rejection, an Examiner’s rejections are bad over half the time. This is obviously not very favorable to the USPTO. But when bad rejections get overlooked because of one affirmed rejection, any accountability for an Examiner’s bad rejections is lost.

This way of measuring affirmances also skews how often Examiners are truly upheld, because not all grounds of rejection are equally critical to the application moving forward. In fact, some rejections require only minor claim amendments or Terminal Disclaimers that insignificantly affect the patent protection. Any system of measuring outcomes should take this actual effect of the specific grounds of rejection into account to provide true accountability. It becomes difficult to measure accountability of examiners, art units, tech centers, etc., when trivial affirmances by the Board mask substantive reversals.

For example, a recent decision, Ex parte Lee et al., had two grounds of rejection on appeal: obviousness and double patenting for the same claims. The Board reversed the obviousness rejection, but because the appellant did not argue the double patenting rejection, the Board summarily affirmed it. The appellant did not fight the double patenting rejection because of an intention to file a Terminal Disclaimer, which would have rendered the rejection moot. Yet because of that non-substantive affirmance, the entire decision is marked as affirmed. This, when all three Examiners involved with the case got, in the Board’s view, the substantive legal issue of obviousness completely wrong.

The outcome for Ex parte Lee is far from what one might expect. One would expect if the appellant won on the only issue it actually argued, then that outcome would be marked as “reversed.” Even being generous, you could permit an outcome “affirmed-in-part,” considering the Examiner did get affirmed on one issue (even if the affirmed ground was not on the merits). But you would certainly never consider this decision as affirmed—the application is going to issue as a patent. However, the bizarre outcome of “affirmed” is exactly how this decision is counted and reported to the public by the USPTO.

The second way that the USPTO’s measuring system is deficient is that it does not show how many of the rejections get overturned. If the USPTO needs only one ground of rejection to qualify an appealed decision as affirmed, the tracking system effectively ignores the outcome on appeal of all remaining rejections. This greatly skews the data toward counting affirmances. In fact, because most decisions involve more than one ground of rejection, mixing accurate one-ground decisions with inaccurate multiple-ground decisions makes the USPTO’s affirmance statistics almost meaningless. They certainly do not accurately reflect the accountability of an Examiner’s rejections.

This is also true for the other senior Examiners involved in the appeal process. Before every appealed case, an appeal conference takes place consisting of the Examiner, the Examiner’s supervisor, and another primary examiner. For the appeal to proceed to the judge panel, this appeal conference must agree that the Examiner’s current rejections would likely be affirmed by the APJs on the Board. In other words, before the judges even hear the case, the appeal conference has the authority not to take the case to the panel: the conferees can instead withdraw the pending rejections by issuing a Notice of Allowance or by reopening prosecution with a new Office Action. So for a decision that makes it to the judge panel, one might assume that the supervisor and the other primary examiner fully agree with all the rejections as they stand.

But with the current way of measuring appeal decisions involving multiple issues, if only one of those grounds of rejection sticks, the Examiner and the appeal conference did a “good job”: they were affirmed! Thus, the appeal conference examiners really only need one of the rejections to be “good enough” for the appeal to proceed, and they can pick cases they are sure have a “good enough” rejection that will not adversely affect their reputation.

The USPTO’s practice of measuring outcomes would not necessarily be a skewed way of measuring were there only one ground of rejection per decision. Nor would this practice be skewed if decisions with multiple grounds of rejection were properly designated as “affirmed-in-part” when the decision reverses on one ground and affirms for another. However, since most decisions involve multiple issues, the outcomes data counts one ground as being a full affirmance, overshadowing the remaining grounds.

From the way the USPTO currently advertises its appeals statistics, the agency seems proud of its affirmance rates. This, because the USPTO’s way of measuring affirmances happens to favor the USPTO. But if you accept the USPTO’s affirmance rates at face value, you get counterintuitive and misleading results, especially in cases involving multiple grounds of rejection. The current measuring system lacks the necessary granularity, and the public only sees a roll-up of the flawed affirmance data.

Since certain rejections are reversed more often than others, and since there is wide variability across tech centers and art units, having additional granularity on appeals is critical to drawing meaning from the publicly available data. Without a more comprehensive way of measuring outcomes based on what substantively happened in each appeal, the USPTO Examining Corps is not truly held accountable.

A more accurate way of measuring appeals is keeping track of the outcome for each ground of rejection. This is exactly what Anticipat Research Database does. An important part of Anticipat’s mission is extracting value from appeals decisions by devising an intuitive way of processing decisions.

The Anticipat research database keeps track of the outcome of every ground of rejection in ex parte appeals, for greater precision. You can see which specific rejections are being reversed across various art units, tech centers, and so on. This more accurate data may not fit a neat pie chart, but it is far more useful for holding the USPTO accountable. It is also helpful for setting expectations in patent prosecution strategy and evaluating the strength of rejections. With the data, you can even see, for the first time, how often the Board agrees with the Examiner’s supervisor.
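Per-ground tracking like this is, at bottom, a simple tally over (ground, outcome) pairs rather than over whole decisions. A minimal sketch with made-up sample data (the decision records and ground labels here are hypothetical, not Anticipat’s actual schema):

```python
from collections import defaultdict

# Each appealed decision is a list of (ground, Board outcome) pairs.
# Hypothetical sample data for illustration only.
decisions = [
    [("103", "reversed"), ("double patenting", "affirmed")],  # an Ex parte Lee-style case
    [("101", "affirmed")],
    [("101", "reversed"), ("112", "reversed")],
]

tally = defaultdict(lambda: [0, 0])  # ground -> [reversed count, total count]
for decision in decisions:
    for ground, outcome in decision:
        tally[ground][1] += 1
        if outcome == "reversed":
            tally[ground][0] += 1

for ground, (rev, total) in sorted(tally.items()):
    print(f"{ground}: {rev}/{total} reversed")
```

Under the USPTO’s decision-level marking, the first sample decision above would count as a full affirmance; the per-ground tally instead records its obviousness reversal.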

Click here for a free seven-day trial