Tech center directors currently use their own appeal metrics for assessing examiners, but should use Anticipat data instead

The USPTO has a vested interest in knowing how well its patent examiners examine applications. It tracks production, efficiency and quality. Even though quality examination has always been tricky to measure, one metric comes pretty close: an examiner’s appeal track record. And while tech center directors have had access to this data, until recently it has been difficult for anyone else to access. Here we explore the known gaps in how this metric is being used at the USPTO.

According to sources at the USPTO, directors, who oversee each technology center, have access to their Examiners’ appeal track records. The more often an Examiner is affirmed by the PTAB on appeal, the more reasonable the Examiner’s rejections, the theory goes. This means that directors can evaluate an Examiner based on how often he or she is affirmed.

What counts as an acceptable appeal track record appears to depend on the director. An Examiner whose affirmance rate is significantly below the director’s average will attract attention. The USPTO as a whole has an affirmance rate at the PTAB that hovers around 60%, and different art unit groupings vary significantly from this global rate. An affirmance rate consistently lower than the relevant average can put a question mark over an Examiner’s examination quality.

Even without knowing the specific contours of what the USPTO considers an acceptable affirmance rate, a look at the numbers can give an Examiner a general idea of how well he or she is doing. This can help an Examiner proactively track these metrics before they become a problem and use them to guide appeal-forwarding strategy. (Full disclosure: as a quality control metric, Examiners do not appear to be punished in any way for being reversed.)

While appeal outcomes are available from other patent analytics services, those services rely only on the USPTO’s own outcome labels, which are based on how the overall decision was decided. See the chart below.

[Chart: decision-based appeal outcome rates]

This decision-based outcome doesn’t communicate which issues are causing examination problems (that is, which grounds of rejection are being reversed at the Board). By contrast, Anticipat provides a detailed breakdown of every one of an Examiner’s decisions. Examiners can thus easily pull up all of their appealed decisions and quickly see on which issues they were affirmed, reversed, or affirmed-in-part.

On top of Examiner-specific data, Anticipat can identify reversal rates for particular grounds of rejection across art units. Take obviousness rejections, for example. Using Anticipat’s Analytics page to look at art unit 2172, in the computer/electrical arts, over the past couple of years, the pure reversal rate is about 18% (see the blue sections of the graph below). This is lower than the tech center reversal rate of 27% and lower than the global USPTO reversal rate for the same period.

[Chart: art unit 2172]

On the other hand, art unit 1631 in the biotech arts has a much higher reversal rate, on a decision pool of about the same size. Specifically, art unit 1631 has an obviousness reversal rate of 43% over the past couple of years, greater than the tech center 1600 reversal rate of 26%.

[Chart: art unit 1631]

Finally, art unit 3721 in the mechanical arts has an obviousness reversal rate much higher than either of the above examples. Specifically, obviousness rejections in 3721 were wholly reversed 53% of the time over the past couple of years. This is higher than the tech center reversal rate of 44%, which is in turn higher than the global USPTO rate.

[Chart: art unit 3721]
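For readers who want to reproduce this kind of tabulation on their own data set, the arithmetic is straightforward: a ground’s pure reversal rate in an art unit is the share of appealed decisions addressing that ground in which the ground was wholly reversed. The sketch below is a minimal, illustrative example; the record fields (art_unit, ground, outcome) are hypothetical, not Anticipat’s actual export format, and the sample figures are made up.

```python
from collections import defaultdict

def pure_reversal_rate(records, ground="obviousness"):
    """Per-art-unit pure reversal rate for one ground of rejection.

    records: iterable of dicts with hypothetical keys 'art_unit',
    'ground', and 'outcome' ('reversed', 'affirmed', 'affirmed-in-part').
    """
    totals = defaultdict(int)     # decisions addressing the ground, per art unit
    reversals = defaultdict(int)  # decisions where that ground was wholly reversed

    for rec in records:
        if rec["ground"] != ground:
            continue
        totals[rec["art_unit"]] += 1
        if rec["outcome"] == "reversed":
            reversals[rec["art_unit"]] += 1

    return {au: reversals[au] / totals[au] for au in totals}

# Made-up sample data for illustration only.
sample = [
    {"art_unit": "2172", "ground": "obviousness", "outcome": "affirmed"},
    {"art_unit": "2172", "ground": "obviousness", "outcome": "reversed"},
    {"art_unit": "3721", "ground": "obviousness", "outcome": "reversed"},
    {"art_unit": "3721", "ground": "obviousness", "outcome": "reversed"},
]
print(pure_reversal_rate(sample))  # {'2172': 0.5, '3721': 1.0}
```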

The granularity of appeal data can show what the currently available appeal data does not: whether an Examiner is substantively doing a good job of examining applications. There are three reasons this granularity matters for meaningful analysis of the metric.

First, as we’ve previously reported, the USPTO labels a decision as affirmed if even one rejection is sustained as to all pending claims. So the affirmance-rate statistics used by the USPTO and its directors, and provided by other patent analytics websites, lack the proper context. And without such context, the appeal outcome is an incomplete and even misleading metric.
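To make the distortion concrete, here is a minimal sketch of that labeling rule under a simplified model (ignoring affirmed-in-part outcomes) in which each ground is either sustained as to all pending claims or not; the function and field names are illustrative, not the USPTO’s:

```python
def overall_label(ground_outcomes):
    """Simplified USPTO-style overall label: 'affirmed' if any single ground
    is sustained as to all pending claims, otherwise 'reversed'."""
    return "affirmed" if "affirmed" in ground_outcomes.values() else "reversed"

# One appealed decision with three grounds, only one of which survived.
grounds = {
    "obviousness": "affirmed",
    "eligibility": "reversed",
    "written description": "reversed",
}

print(overall_label(grounds))  # 'affirmed' -- the headline, decision-level statistic
share_reversed = sum(o == "reversed" for o in grounds.values()) / len(grounds)
print(f"{share_reversed:.0%} of grounds reversed")  # '67% of grounds reversed'
```

An issue-level view like the one described above would record two reversals and one affirmance here, rather than a single “affirmed.”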

Second, not all of the responsibility for a low affirmance rate falls on the Examiner. For example, the two other conferees at the appeal conference can push applications to the Board that don’t have good rejections. But the Examiner-specific metric is a good starting point for spotting deviations from the norm, and Anticipat allows lookups of other Examiners (including the SPE) to determine conferee track records.

A third reason for variance in Examiner appeal outcomes stems from how the judges label the outcome. While it is somewhat rare for a decision to include a newly designated rejection, it does happen. And as Bill Smith and Allen Sokal recently pointed out in a Law360 article, decisions with new designations are inconsistently labeled as affirmed or reversed. Sometimes the panel will reverse the Examiner’s improper rejection but introduce a new rejection on that same ground with its own analysis. Other times the panel will affirm the Examiner’s improper rejection with its own analysis and be forced to designate the rejection as new. These small differences in patent judges’ preferences can affect an Examiner’s appeal profile.

Anticipat makes up for these shortcomings by providing greater context to outcomes and grounds of rejection. You can look at judge rulings in a particular tech center and identify patterns. For example, you can see whether panels tend to reverse or affirm when introducing new rejections.

Other valuable information, such as art unit and tech center data, can help predict the chances that a rejection will hold up on appeal to the Board. If a particular kind of rejection consistently gets reversed on appeal, that knowledge can guide the strategy at the pre-appeal conference or appeal conference of whether to forward the case to the Board based on the specific rejection at hand, especially if the appeal consists of only a single issue.

With this increased granularity in appeal data come more questions, and the answers to these questions are currently less clear. For example, to what extent are greatly disparate appeal outcomes the result of differing quality of examination? To what extent are Examiners across different tech centers evaluated based on appeal outcomes? Is there a point at which an Examiner is considered to need improvement based on appeal outcomes? Could appeal outcomes, even if they include many reversals, affect the Examiner’s performance review? Likely not. But a better lens could prompt questions about disparate reversal rates across art units.

Rare split panel at PTAB reverses abstract idea rejection

 

Rarely do the three-judge panels at the PTAB offer differing opinions in ex parte appeal decisions. It’s not necessarily that these judges agree with each other all the time. Rather, the USPTO production quota system does not reward judges for separate concurrences or dissents, so any time a judge spends writing a separate opinion is, in essence, off-the-clock work. But this does not deter some judges from branching out from the panel, as shown in a recent case that reversed an abstract idea rejection: Ex Parte Boucher et al, Appeal No. 2017-003484 (PTAB Oct. 31, 2017).

In Ex Parte Boucher, the majority’s reversal of the Examiner’s rejection under Section 101, authored by Joseph L. Dixon and joined by Larry J. Hume, was brief. It held that the Examiner had provided insufficient factual findings and analysis on patent eligibility. The Examiner’s asserted abstract idea, “manipulating data for the purpose of fault detection,” oversimplified the claimed invention, according to the majority.

The majority further agreed with the appellant’s arguments that the claims are directed to new and useful improvements for detecting or diagnosing faults of items or of functions implemented by the items of an aircraft. Thus, the claims were not solely directed to an abstract idea.

Judge Joyce Craig disagreed with the majority on Section 101. The dissent would have characterized the claims as directed to existing information in a database, analogizing them to Intellectual Ventures I LLC v. Erie Indemnity Co., 850 F.3d 1315, 1327 (Fed. Cir. 2017). The dissent would have characterized the remaining part of the claims as using a mathematical algorithm to manipulate existing information to generate additional information, citing Digitech Image Techs., LLC v. Elecs. for Imaging, Inc., 758 F.3d 1344, 1351 (Fed. Cir. 2014). Thus, the dissent would have concluded that the claims are directed to an abstract idea under step one of the Mayo/Alice analysis.

Under step two, the dissent looked at the claim elements individually and saw nothing more than routine computer functions that “amount to no more than the performance of well-understood, routine, and conventional activities previously known to the industry.” Thus, the dissent would have agreed with the Examiner and sustained the rejection.

The Federal Circuit and district courts are not the only judicial bodies at odds with each other over applying a consistent and cohesive framework for the two-step patent-eligibility analysis. It can thus be useful to have additional knowledge to guide your assessment of your chances on appeal.

With Anticipat Research, you can see which judges are deciding cases in your tech center to get a better sense of your chances on appeal. For example, if you are appealing a Section 101 rejection, will you get a Judge Craig type of panel that tends to agree with the Examiner’s analysis, or a Judge Dixon type of panel that sides with the appellant’s analysis? See the interface below.

[Screenshot: Anticipat Research filters]

Try Anticipat Research today to see which judges are on panel decisions that reverse, specific to a tech center of interest. Then you can search for how often these judges appear both on panels and as authoring judges in your tech center.