Tech center directors currently use their own appeal metrics for assessing examiners, but should use Anticipat data instead

The USPTO has a vested interest in knowing how well its patent examiners examine applications. It tracks production, efficiency and quality. Quality of examination has always been tricky to measure, but one metric comes pretty close: an examiner's appeal track record. And while tech center directors have had access to this data, until recently it has been difficult to access more broadly. Here we explore the known gaps in how this metric is being used at the USPTO.

According to sources at the USPTO, directors, who oversee each technology center, have access to their Examiners' appeal track records. The theory goes that the more often an Examiner is affirmed by the PTAB on appeal, the more reasonable that Examiner's rejections must have been. This means directors can evaluate examiners based on how often they are affirmed.

The acceptable examiner appeal track record appears to depend on the director. An Examiner whose affirmance rate falls significantly below the director's average will attract attention. The USPTO as a whole has a PTAB affirmance rate that hovers around 60%, and different art unit groupings vary significantly from this global rate. An affirmance rate consistently below the relevant average can put a question mark on an Examiner's examination quality.

Even without knowing the specific contours of what counts as an acceptable affirmance rate at the USPTO, a look at the numbers can give an Examiner a general idea of how well he/she is doing. This can help an Examiner proactively learn about these metrics before running into trouble and use them to guide his/her appeal forwarding strategy. (Full disclosure: as a quality control metric, examiners do not appear to be punished in any way for being reversed.)
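For a rough sense of the arithmetic, the sketch below computes an affirmance rate from hypothetical counts and compares it to the roughly 60% global figure mentioned above. The counts, the function name, and the comparison are illustrative only; they are not drawn from actual USPTO or Anticipat data.

```python
# Hypothetical illustration: computing an examiner's affirmance rate
# and comparing it to a benchmark. All numbers below are made up.

def affirmance_rate(affirmed: int, reversed_: int, affirmed_in_part: int) -> float:
    """Share of decided appeals in which the examiner was wholly affirmed."""
    total = affirmed + reversed_ + affirmed_in_part
    return affirmed / total if total else 0.0

uspto_benchmark = 0.60  # global PTAB affirmance rate, roughly 60%

# Example examiner: 9 affirmances, 5 reversals, 2 affirmed-in-part
examiner_rate = affirmance_rate(affirmed=9, reversed_=5, affirmed_in_part=2)
print(f"Examiner affirmance rate: {examiner_rate:.0%}")                  # ~56%
print(f"Gap vs. USPTO benchmark:  {examiner_rate - uspto_benchmark:+.0%}")  # about -4%
```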

While the USPTO's appeal outcomes are available from other patent analytics services, those services only report the USPTO's own outcome labels, which are based on how the overall decision was decided. See below.

[Figure: appeal outcome rates as reported by the USPTO]

This decision-based outcome doesn't communicate which issues are causing examination problems, i.e., which grounds of rejection are being reversed at the Board. By contrast, Anticipat provides a detailed breakdown of all of an Examiner's decisions. Examiners can thus easily pull up all of their appealed decisions and quickly see on which issues they were affirmed, reversed, or affirmed-in-part.
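The following is a minimal sketch of what such an issue-level tally might look like, using invented appeal numbers and a made-up record layout (this is not Anticipat's actual data model): each decision carries an outcome per ground of rejection, and outcomes are counted per issue rather than per decision.

```python
from collections import Counter, defaultdict

# Hypothetical appealed decisions for one examiner. Each decision records
# an outcome per issue (ground of rejection), not just one overall label.
decisions = [
    {"appeal": "2020-001234", "issues": {"103 obviousness": "reversed",
                                         "112(b) indefiniteness": "affirmed"}},
    {"appeal": "2020-005678", "issues": {"103 obviousness": "affirmed"}},
    {"appeal": "2021-000042", "issues": {"101 eligibility": "affirmed-in-part",
                                         "103 obviousness": "reversed"}},
]

# Tally outcomes per issue across all of the examiner's decisions.
per_issue = defaultdict(Counter)
for d in decisions:
    for issue, outcome in d["issues"].items():
        per_issue[issue][outcome] += 1

for issue, counts in per_issue.items():
    print(issue, dict(counts))
# 103 obviousness {'reversed': 2, 'affirmed': 1}
# 112(b) indefiniteness {'affirmed': 1}
# 101 eligibility {'affirmed-in-part': 1}
```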

On top of Examiner-specific data, Anticipat can identify rejection reversal rates across art units. Take obviousness rejections, for example. Using Anticipat's Analytics page to look up art unit 2172, in the computer/electrical arts, over the past couple of years, the pure reversal rate is about 18% (see the blue sections of the graph below). This is lower than the tech center reversal rate of 27% and lower than the global USPTO reversal rate for the same period.

[Figure: obviousness appeal outcomes for art unit 2172]

On the other hand, art unit 1631 in the biotech arts has a much higher reversal rate on a decision pool of about the same size. Specifically, art unit 1631 has a reversal rate of 43% over the past couple of years, greater than the 26% reversal rate of its tech center, 1600.

[Figure: obviousness appeal outcomes for art unit 1631]

Finally, art unit 3721 in the mechanical arts has an obviousness reversal rate much higher than both of the above examples. Specifically, 3721 has a pure (wholly reversed) rate of 53% over the past couple of years. This is higher than its tech center reversal rate of 44%, which is in turn higher than the global USPTO rate.

[Figure: obviousness appeal outcomes for art unit 3721]

This granularity of appeal data can show what currently available appeal data does not: whether an Examiner is substantively doing a good job of examining applications. There are three reasons this granularity is important for meaningful analysis of the metric.

First, as we've previously reported, the USPTO labels a decision as affirmed if just one ground of rejection sticks against all pending claims, even when other grounds are reversed. So the USPTO/director statistics, and the other patent analytics websites that report this affirmance rate, lack the proper context. And without such context, the appeal outcome is an incomplete and even misleading metric.
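To make the labeling rule concrete, here is a hedged sketch using made-up grounds and a simplified version of the rule as described above (not actual USPTO code): a decision is tagged affirmed when at least one ground sticks against all pending claims, so the decision-level label can hide that most grounds were reversed.

```python
# Hypothetical sketch of the decision-level labeling rule described above.
# Simplifying assumption: each ground below applies to all pending claims,
# so one "affirmed" ground is enough to label the whole decision affirmed.

decision_grounds = {
    "103 obviousness": "affirmed",      # this ground alone sustains all claims
    "101 eligibility": "reversed",
    "112(a) written description": "reversed",
}

decision_label = ("affirmed"
                  if any(v == "affirmed" for v in decision_grounds.values())
                  else "reversed")

issue_reversal_rate = (sum(v == "reversed" for v in decision_grounds.values())
                       / len(decision_grounds))

print(decision_label)                                     # affirmed
print(f"{issue_reversal_rate:.0%} of grounds reversed")   # 67% of grounds reversed
```

On these hypothetical facts, the decision counts as a full affirmance even though two of the three grounds of rejection were reversed.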

Second, not all of the responsibility for a low affirmance rate falls on the Examiner. For example, the two other conferees at the appeal conference can push applications to the Board that do not have good rejections. But the Examiner-specific metric is a good starting point for spotting deviations from the norm, and Anticipat allows lookups of other Examiners (including the SPE) to determine conferee track records.

A third source of variance in Examiner appeal outcomes is how the judges label the outcome. While it is somewhat rare for a decision to include a newly designated ground of rejection, it does happen. And as Bill Smith and Allen Sokal recently pointed out in a Law360 article, decisions with new designations are inconsistently labeled as affirmed or reversed. Sometimes the panel will reverse the Examiner's improper rejection but introduce a new rejection on the same ground with its own analysis. Other times the panel will affirm the Examiner's improper rejection with its own analysis and be forced to designate the rejection as new. These small differences in patent judges' preferences can impact an Examiner's appeal profile.

Anticipat makes up for these shortcomings by providing greater context to outcomes and grounds of rejection. You can look at judge rulings in a particular tech center and identify patterns. For example, you can see whether panels tend to reverse or affirm when introducing new rejections.

Other valuable information, such as art unit and tech center data, can help predict how an Examiner's appeal is likely to fare at the Board. If a particular rejection consistently gets reversed on appeal, that knowledge can guide strategy at the pre-appeal conference or appeal conference on whether to forward the case to the Board based on the specific rejection at hand, especially if the appeal consists of only a single issue.

With this increased granularity in appeal data come more questions, and these specific questions currently have less clear answers. For example, to what extent are greatly disparate appeal outcomes the result of differing quality of examination? To what extent are Examiners across different tech centers evaluated based on appeal outcomes? Is there a point at which an Examiner is considered to need improvement based on appeal outcomes? Could appeal outcomes, even if they include many reversals, affect the Examiner's performance review? Likely not. But a better lens could prompt questions about disparate reversal rates across art units.
