Tech center directors currently use their own appeal metrics for assessing examiners, but should use Anticipat data instead

The USPTO has a vested interest in knowing how well its patent examiners examine applications. It tracks production, efficiency and quality. Even though examination quality has always been tricky to measure, one metric comes pretty close: an examiner's appeal track record. And while tech center directors have had access to this data, it has until recently been difficult for anyone else to access. Here we explore the known gaps in how this metric is used at the USPTO.

According to sources at the USPTO, directors, who oversee each technology center, have access to their examiners' appeal track records. The more often an examiner gets affirmed by the PTAB on appeal, the more reasonable the examiner's rejections, the theory goes. This means that directors can evaluate examiners based on how often they are affirmed.

The acceptable appeal track record appears to depend on the director. An examiner whose affirmance rate falls significantly below the director's average will attract attention. The USPTO as a whole has an affirmance rate at the PTAB that hovers around 60%, and different art unit groupings vary significantly from this global rate. An affirmance rate consistently lower than the relevant average can put a question mark over an examiner's examination quality.

Even without knowing the specific contours of what counts as acceptable at the USPTO, a look at the numbers can give an examiner a general idea of how well he or she is doing. This lets an examiner learn about these metrics proactively, before getting into trouble, and use them to guide appeal forwarding strategy. (Full disclosure: as a quality control metric, examiners do not appear to be punished in any way for being reversed.)

While appeal outcomes are available from other patent analytics services, those services rely only on the USPTO's outcome labels, which reflect how the overall decision was decided. See below.

[Chart: decision-level appeal outcome rates]

This decision-based outcome doesn't communicate which issues are causing examination problems (i.e., which grounds of rejection are being reversed at the Board). By contrast, Anticipat provides a detailed breakdown of all of an examiner's decisions. Examiners can thus easily pull up all of their appealed decisions and quickly see on which issues they were affirmed, reversed, or affirmed-in-part.
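To make this contrast concrete, here is a minimal sketch, in Python, of the kind of per-issue tally such a breakdown enables. The decision records and field names below are hypothetical, invented purely for illustration; they are not Anticipat's actual data format or API.

```python
from collections import defaultdict

# Hypothetical per-issue outcome records for one examiner's appealed decisions.
# Each decision lists the grounds of rejection on appeal and how the Board
# resolved each one: "affirmed", "reversed", or "affirmed-in-part".
decisions = [
    {"appeal_no": "2016-001234", "issues": {"obviousness": "reversed",
                                            "anticipation": "affirmed"}},
    {"appeal_no": "2016-005678", "issues": {"obviousness": "reversed"}},
    {"appeal_no": "2017-000042", "issues": {"obviousness": "affirmed",
                                            "written description": "affirmed-in-part"}},
]

# Tally outcomes per ground of rejection rather than per decision.
tally = defaultdict(lambda: defaultdict(int))
for decision in decisions:
    for ground, outcome in decision["issues"].items():
        tally[ground][outcome] += 1

for ground, outcomes in tally.items():
    total = sum(outcomes.values())
    reversed_share = outcomes.get("reversed", 0) / total
    print(f"{ground}: {dict(outcomes)} ({reversed_share:.0%} reversed)")
```

A decision-level label would collapse each of these decisions into a single affirmed or reversed tag; the per-issue tally is what shows where an examiner's rejections actually hold up.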

On top of examiner-specific data, Anticipat can identify reversal rates for specific grounds of rejection across art units. Take obviousness rejections, for example. Using Anticipat's Analytics page to look at the past couple of years in art unit 2172, in the computer/electrical arts, the pure reversal rate is about 18% (see the blue sections of the graph below). This is lower than the tech center reversal rate of 27% and lower than the global USPTO reversal rate for the same period.

[Chart: obviousness reversal rate for art unit 2172]

On the other hand, art unit 1631 in the biotech arts has a much higher reversal rate on a decision pool of about the same size. Specifically, art unit 1631 has a reversal rate of 43% over the past couple of years, which is greater than the 26% reversal rate of its tech center, 1600.

[Chart: obviousness reversal rate for art unit 1631]

Finally, art unit 3721 in the mechanical arts has an obviousness reversal rate much higher than both of the above examples: 53% of its appealed obviousness rejections over the past couple of years were wholly reversed. This is higher than the tech center reversal rate of 44%, which is in turn higher than the global USPTO rate.

[Chart: obviousness reversal rate for art unit 3721]
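Behind these percentages, the computation is a simple filtered tally. Here is a minimal sketch of that comparison across art units; the records below are invented for illustration, and the output does not reproduce the actual Anticipat figures.

```python
# Hypothetical appealed-rejection records: (art unit, ground, per-issue outcome).
records = [
    ("2172", "obviousness", "reversed"),
    ("2172", "obviousness", "affirmed"),
    ("1631", "obviousness", "reversed"),
    ("1631", "obviousness", "affirmed-in-part"),
    ("3721", "obviousness", "reversed"),
    ("3721", "obviousness", "affirmed"),
    # ...many more records in practice
]

def reversal_rate(records, art_unit, ground="obviousness"):
    """Pure reversal rate for one ground of rejection in one art unit."""
    outcomes = [outcome for au, g, outcome in records
                if au == art_unit and g == ground]
    return sum(outcome == "reversed" for outcome in outcomes) / len(outcomes)

for au in ("2172", "1631", "3721"):
    print(f"Art unit {au}: {reversal_rate(records, au):.0%} obviousness reversals")
```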

The granularity of appeal data can show what currently available appeal data does not: whether an examiner is substantively doing a good job of examining applications. There are three reasons this granularity matters for meaningful analysis of the metric.

First, as we've previously reported, the USPTO labels a decision as affirmed if even one rejection is sustained as to all pending claims, no matter how many other rejections fall. So the statistics used by the USPTO and its directors, and the affirmance rates provided by other patent analytics websites, lack the proper context. And without that context, the appeal outcome is an incomplete and even misleading metric.
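To illustrate how that labeling rule can mislead, here is a rough sketch of it applied to the same kind of hypothetical per-issue records used above. The helper function is ours, written as an approximation of the rule described here, not the USPTO's actual logic.

```python
def decision_level_label(issue_outcomes):
    """Approximate the decision-level label: if at least one rejection is
    sustained as to all pending claims, the whole decision counts as affirmed,
    regardless of how many other rejections are reversed."""
    if any(outcome == "affirmed" for outcome in issue_outcomes.values()):
        return "affirmed"
    if any(outcome == "affirmed-in-part" for outcome in issue_outcomes.values()):
        return "affirmed-in-part"
    return "reversed"

# A decision where the Board reversed three of four rejections still shows up
# as a plain "affirmed" in decision-level statistics.
issues = {
    "obviousness": "reversed",
    "anticipation": "reversed",
    "indefiniteness": "reversed",
    "patent eligibility": "affirmed",
}
print(decision_level_label(issues))  # -> affirmed
```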

Second, not all of the responsibility for a low affirmance rate falls on the examiner. For example, the two other conferees at the appeal conference can push applications with weak rejections to the Board. Still, the examiner-specific metric is a good starting point for spotting deviations from the norm. Anticipat also allows lookups of other examiners (including the SPE) to determine conferee track records.

A third reason for variance in examiner appeal outcomes stems from how the judges label the outcome. While it is somewhat rare for a decision to include a newly designated rejection, it does happen. And as Bill Smith and Allen Sokal recently pointed out in a Law360 article, decisions with new designations are inconsistently labeled as affirmed or reversed. Sometimes the panel will reverse the examiner's improper rejection but introduce a new rejection on the same ground based on its own analysis. Other times the panel will affirm the examiner's improper rejection with its own analysis and be forced to designate the rejection as new. These small differences in patent judges' preferences can affect an examiner's appeal profile.

Anticipat makes up for these shortcomings by providing greater context to outcomes and grounds of rejection. You can look at judge rulings in a particular tech center and identify patterns. For example, you can see whether panels tend to reverse or affirm when introducing new rejections.

Other valuable information, such as art unit and tech center data, can help predict the chances of an appeal succeeding at the Board. If a particular rejection consistently gets reversed on appeal, that knowledge can guide the decision at the pre-appeal conference or appeal conference of whether to forward the case to the Board, especially if the appeal turns on a single issue.

With this increased granularity in appeal data come more questions, many of which currently have less clear answers. For example, to what extent are greatly disparate appeal outcomes the result of differing quality of examination? To what extent are examiners across different tech centers evaluated based on appeal outcomes? Is there a point at which an examiner is considered to need improvement based on appeal outcomes? Could appeal outcomes, even ones that include many reversals, affect an examiner's performance review? Likely not. But a better lens could prompt questions about disparate reversal rates across art units.

The appeal outcome is one of the most telling metrics for patent prosecution analytics. Here’s why

Big data is slated to revolutionize all aspects of society, and patent prosecution is no exception. But because of the complexity of patent prosecution, insights from big data must be carefully considered. Here we look at why the appeal outcome is one of the most telling metrics: it yields good insights with few misleading explanations.

Because much patent data is publicly available, some companies are amassing troves of it. And some metrics that suggest insight are relatively easy to calculate.

Take the allowance rate. You compare an examiner's total granted patents to the total applications handled, and voila. In some circles a high allowance rate is a good thing for both examiner and applicant. Under this theory, an examiner with a high allowance rate is reasonable and applicant-friendly. By the same token, a law firm with a high allowance rate is good. This theory also holds that an examiner or law firm with a low allowance rate is bad.

Another metric is the speed of prosecution: take the average time it takes to grant an application, measured in months, office actions, and/or RCEs. Under this theory, an examiner or law firm with a faster time to allowance is better.
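Both metrics really are this easy to compute. Here is a minimal sketch with made-up application records, purely to show how little goes into them; the field names are invented for illustration and are not any vendor's actual data format.

```python
from datetime import date

# Hypothetical disposed applications for one examiner: filing date,
# disposal date, and whether the disposal was a grant or an abandonment.
applications = [
    {"filed": date(2014, 3, 1), "disposed": date(2016, 9, 15), "granted": True},
    {"filed": date(2014, 7, 20), "disposed": date(2017, 1, 5), "granted": False},
    {"filed": date(2015, 2, 10), "disposed": date(2016, 11, 30), "granted": True},
]

granted = [app for app in applications if app["granted"]]

# Allowance rate: granted patents divided by total disposed applications.
allowance_rate = len(granted) / len(applications)

# Speed: average months from filing to grant, counting only the granted cases.
avg_months_to_grant = sum(
    (app["disposed"] - app["filed"]).days / 30.44 for app in granted
) / len(granted)

print(f"Allowance rate: {allowance_rate:.0%}")
print(f"Average time to grant: {avg_months_to_grant:.1f} months")
```

The point of the next few paragraphs is that numbers this easy to compute are also easy to misread.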

While these theories could be true in certain situations, there are confounding explanations that point to the opposite conclusion. In short, these metrics yield woefully incomplete insights, for three reasons.

First, these metrics incorrectly assume that all patent applications are of the same quality. In reality, examiners are assigned patent applications in a non-random manner. The same applicant (because of the similarity of subject matter) will have a large proportion of applications assigned to a single examiner or art unit. This means that a batch of related applications from the same applicant (drafted with the help of counsel) can be of very high or very low quality. Related low-quality applications can suffer from the same problems (little inventiveness, lack of novelty, poor drafting) that have nothing to do with the examiner. So an examiner, through no fault of his own, can get assigned a high percentage of “bad” applications. In these cases, a low allowance rate should reflect on the applications, not the examiner.

Correspondingly, clients instruct some law firms to prosecute patent applications in a non-random fashion. Especially for cost-sensitive clients, some firms are assigned very specialized subject matter to maximize efficiency. But not all subject matter is equally patentable. So even when a law firm has a low allowance rate, that often does not mean the firm is doing a bad job. On the contrary, the firm could be doing a better-than-average job for the subject matter, especially in light of budget constraints imposed by the client.

This non-random distribution of patentable subject matter undermines use of a bare allowance rate or time to allowance.

Second, favorable allowance rate or allowance time metrics can actually signal a poor quality patent. One job of patent counsel is to advocate for claim breadth. But examiners will propose narrowing amendments, even when not required by the patent laws, because doing so makes the examiner's job easier and avoids subsequent quality issues. So a quick and easy notice of allowance often merely signifies a narrow and less valuable patent. Office actions also often include improper rejections, so a metric that rewards a quick compromise can indicate that the law firm isn't advocating strongly enough. Thus, using these metrics to identify good law firms could be telling you the opposite, depending on your goals.

Plus, getting a patent is not always what best serves the client's needs. Some clients do not want to pay an issue fee and subsequent maintenance fees on a patent that will never be used because it is so narrow and limiting. So it is a mark of good counsel when such clients are not handed a patent just because a grant is the destination. Ideally, the client is brought into the business decision of how the patent will generate value. Counsel that does this, and whose allowance rate drops because of it, is unfairly portrayed as bad.

Third, these metrics lack substantive analysis. Correlating what happens with patent applications only goes so far without knowing the points at issue that led to the particular outcomes. Some applicants deliberately delay prosecution for various reasons using various techniques, including filing RCEs. These can be legitimate strategies that benefit the client.

All this is not to say that such metrics are not useful. They are. These metrics simply need a lot of context for the true insight to come through.

In contrast to the above patent prosecution metrics, Anticipat's appeal outcome data very directly evaluates the relevant parties in substantive ways. The appeal outcome metric reveals what happens when an applicant believes its position is right and the examiner and his supervisor believe theirs is. In such cases, who is right? The parties are forced to resolve the dispute before an independent judge panel. And because getting to such a final decision requires significant time and resources (e.g., filing briefs and going through one or more appeal conferences that can kick cases out before they reach the Board), the stakes are relatively high. This weeds out alternative explanations for why an applicant chooses to pursue an appeal. Final decisions thus stem from a deliberate position to win a point of contention that the examiner thinks he is right on, not a whimsical exercise. And tracking this provides insight into the reasonableness of examiner rejections and into how counsel advises and advocates.

The appeal metric helps to evaluate the examiner because being overturned on certain grounds of rejection (say, more often than average) says something about the examiner's and his supervisor's ability to apply and assess rejections. After working with an examiner, there can come a point when the examiner will not budge. The PTAB then becomes the judge of whether the examiner is right or wrong. The same analysis works for evaluating whole art units or technology centers.

With Practitioner Analytics, you can see the reversal rates of specific examiners, art units, and tech centers, but you can also look at the specific arguments that have been found persuasive at the Board. This means that if you are dealing with a specific point of contention, say the “motivation to combine” issue of obviousness, you can pull up all the cases where the PTAB overturned your examiner on that point. The overall reversal rate and the specific rationales can both help show that the examiner is not in line with the patent laws and rules.


The appeal metric also helps evaluate counsel because it shows what happens when counsel believes its position is right and the examiner will not budge. If counsel wins on appeal, the metric confirms counsel's judgment and ability to make persuasive arguments in briefs.

Even a ninja can generate allowance rate metrics. But the savvy patent practitioner looks for more context to guide prosecution strategy. Insights drawn from carefully analyzed data avoid misleading, counter-intuitive explanations.