The appeal outcome is one of the most telling metrics for patent prosecution analytics. Here's why.

Big data is slated to revolutionize all aspects of society, and patent prosecution is no exception. But because of the complexity of patent prosecution, insights from big data must be carefully considered. Here we look at why the appeal outcome is one of the most telling metrics: it yields strong insights with few misleading alternative explanations.

Because so much patent data is publicly available, some companies are amassing troves of it. And some metrics that suggest insight are relatively easy to calculate.

Take the allowance rate. You compare an examiner's total granted patents to that examiner's total applications, and voila. In some circles, a high allowance rate is a good thing for both examiner and applicant. Under this theory, an examiner with a high allowance rate is reasonable and applicant-friendly. By the same token, a law firm with a high allowance rate is doing a good job, and a low allowance rate marks a bad examiner or law firm.
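For concreteness, here is a minimal sketch of that bare calculation in Python. The record structure and field names are hypothetical stand-ins for whatever a real patent dataset provides; this illustrates the arithmetic, not any particular vendor's implementation.

```python
from collections import defaultdict

# Hypothetical disposed-application records (illustrative field names).
applications = [
    {"examiner": "A. Smith", "status": "granted"},
    {"examiner": "A. Smith", "status": "abandoned"},
    {"examiner": "B. Jones", "status": "granted"},
]

totals = defaultdict(int)   # applications per examiner
grants = defaultdict(int)   # grants per examiner
for app in applications:
    totals[app["examiner"]] += 1
    if app["status"] == "granted":
        grants[app["examiner"]] += 1

for examiner, total in totals.items():
    print(f"{examiner}: {grants[examiner] / total:.0%} allowance rate")
```

The ease of this computation is exactly the problem the rest of this post describes: the number falls out of two columns of data, with no context attached.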

Another metric is the speed of prosecution: the average time it takes to grant an application, whether measured in months, office actions, or RCEs. Under a theory on speed, an examiner or law firm with a faster time to allowance is better.
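These averages are just as mechanical to compute. Below is a hedged sketch over hypothetical granted-application records; again, the fields are placeholders, not a real schema.

```python
from statistics import mean

# Hypothetical records for granted applications (illustrative fields).
granted = [
    {"months_to_grant": 18, "office_actions": 1, "rces": 0},
    {"months_to_grant": 40, "office_actions": 3, "rces": 1},
    {"months_to_grant": 27, "office_actions": 2, "rces": 0},
]

print("avg months to grant:", mean(a["months_to_grant"] for a in granted))
print("avg office actions: ", mean(a["office_actions"] for a in granted))
print("avg RCEs:           ", mean(a["rces"] for a in granted))
```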

While these theories could be true in certain situations, there are confounding explanations that point to the opposite conclusion. In short, these are woefully incomplete insights, for three reasons.

First, these metrics incorrectly assume that all patent applications are of the same quality. In reality, examiners are assigned patent applications in a non-random manner. Because of the similarity of subject matter, a single applicant will often have a large proportion of its applications assigned to a single examiner or art unit. This means that a batch of related applications from the same applicant (drafted with the help of the same counsel) can be of very high or very low quality. Related low-quality applications can suffer from the same problems (little inventiveness, lack of novelty, poor drafting) that have nothing to do with the examiner. So an examiner, through no fault of his own, can be assigned a high percentage of "bad" applications. In these cases, a low allowance rate should reflect on the applications, not the examiner.

Correspondingly, clients instruct some law firms to prosecute patent applications in a non-random fashion. Especially for cost-sensitive clients, some firms are assigned very specialized subject matter to maximize efficiency. But not all subject matter is equally patentable. So even when a law firm has a low allowance rate, it often does not mean that the firm is doing a bad job. On the contrary, the firm could be doing a better-than-average job for that subject matter, especially in light of budget constraints imposed by the client.

This non-random distribution of patentable subject matter undermines the use of a bare allowance rate or time to allowance.

Second, a favorable allowance rate or time-to-allowance number can actually signal a poor-quality patent. One job of patent counsel is to advocate for claim breadth. But examiners will propose narrowing amendments, even when not required by the patent laws, because narrower claims make the examiner's job easier and reduce the risk of subsequent quality issues. So a quick and easy notice of allowance often merely signifies a narrow and less valuable patent. And because office actions often include improper rejections, a metric that rewards quick compromise can indicate that a law firm isn't advocating hard enough. Using these metrics to identify good law firms could therefore be telling you the opposite, depending on your goals.

Plus, getting a patent is not always what best serves the client's needs. Some clients do not want to pay an issue fee and subsequent maintenance fees on a patent that will never be used because it is so narrow and limiting. It is a mark of good counsel not to push such clients toward a patent just because a grant is the destination; ideally, the client is brought into the business decision of how the patent will generate value. Counsel who does this, and whose allowance rate drops because of it, is unfairly portrayed as bad.

Third, these metrics lack substantive analysis. Correlating what happens to patent applications only goes so far without knowing the points at issue that led to the particular outcomes. Some applicants deliberately delay prosecution for various reasons, using techniques that include filing RCEs. These can be legitimate strategies that benefit the client.

All this is not to say that such metrics are not useful. They are. But they need a lot of context before the true insight comes through.

In contrast to the above metrics, Anticipat's appeal outcome metric evaluates the relevant parties in a directly substantive way. It reveals what happens when an applicant believes its position is right while the examiner and the examiner's supervisor believe theirs is. Who is right? The parties are forced to resolve the dispute before an independent panel of judges. And because getting to a final decision requires significant time and resources (e.g., filing briefs and surviving one or more appeal conferences that can kick cases out before they reach the Board), the stakes are relatively high. This weeds out alternative explanations for an applicant choosing to pursue an appeal: a final decision stems from a considered position and a desire to win a point of contention the examiner thinks he is right on, not from a whimsical exercise. Tracking these outcomes provides insight into the reasonableness of examiner rejections and into how well counsel advises and advocates.

The appeal metric helps to evaluate the examiner: if an examiner is overturned on certain grounds of rejection more often than average, that says something about the examiner's, and his supervisor's, ability to apply and assess rejections. After working with an examiner, there can come a point when the examiner will not budge. The PTAB then becomes the judge of whether the examiner is right or wrong. The same analysis works for evaluating whole art units or technology centers.

With Practitioner Analytics, you can see the reversal rates of specific examiners, art units, and tech centers, but you can also look at the specific arguments that the Board has found persuasive. This means that if you are dealing with a specific point of contention, say a "motivation to combine" issue in an obviousness rejection, you can pull up all the cases where the PTAB overturned your examiner on that point. The overall reversal rate and the specific rationales can together validate that the examiner is not in line with the patent laws and rules.
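To make the idea concrete, here is a sketch of that kind of filtering over a hypothetical local list of PTAB decision records. The schema and issue tags are assumptions for illustration; Anticipat's actual interface and data model may differ.

```python
# Hypothetical PTAB decision records (illustrative fields and tags).
decisions = [
    {"examiner": "A. Smith", "ground": "obviousness",
     "issue": "motivation to combine", "outcome": "reversed"},
    {"examiner": "A. Smith", "ground": "obviousness",
     "issue": "hindsight reasoning", "outcome": "affirmed"},
    {"examiner": "A. Smith", "ground": "obviousness",
     "issue": "motivation to combine", "outcome": "reversed"},
]

# All decisions where this examiner faced the same point of contention.
relevant = [d for d in decisions
            if d["examiner"] == "A. Smith"
            and d["issue"] == "motivation to combine"]

reversals = sum(1 for d in relevant if d["outcome"] == "reversed")
if relevant:
    print(f"Reversal rate on this issue: {reversals / len(relevant):.0%}")
```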


The appeal metric also helps evaluate counsel, because it shows what happens when counsel believes their position is right and the examiner will not budge. If counsel wins on appeal, the metric confirms counsel's judgment and their ability to make persuasive arguments in briefs.

Even a ninja can generate allowance rate metrics. But the savvy patent practitioner looks for more context to guide prosecution strategy. Insights drawn from carefully analyzed data avoid the counterintuitive explanations described above.
