For Obviousness, Some Things Change but the Board Statistics Remain the Same

The Anticipat research database continues to comprehensively cover all legal grounds of rejection considered by the Patent Trial and Appeal Board (PTAB) in its ex parte appeals decisions. This ranges from more exotic grounds, such as statutory subject matter under Section 101, to much more common issues such as obviousness (Section 103).

From July 25, 2016 to February 28, 2019, there were 24,448 obviousness decisions out of 29,102 total decisions, meaning that nearly 84% of all appeals involve obviousness. In our observation, obviousness is the most common ground of rejection decided at the Board. In this post, the data considered excludes decisions where the outcome involved a new ground of rejection based on obviousness, as these typically form only a small fraction of the cases.

Of the 24,448 obviousness decisions, 12,369 were affirmed, 8,386 were reversed, and 2,354 were affirmed in part. Thus, the wholly reversed rate (the obviousness rejection reversed for all claims in the case) was about 34%. The at least partially reversed rate (at least one claim in the case found patentable) was about 44% (43.9%).
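For readers who want to reproduce these figures, the two rates are simple ratios over the obviousness decisions. Here is a minimal sketch in Python using the counts above (the variable names are ours, for illustration only):

```python
# Overall obviousness outcomes reported above (July 25, 2016 - February 28, 2019)
affirmed = 12_369
reversed_count = 8_386
affirmed_in_part = 2_354
total_obviousness = 24_448  # includes a small number of other outcomes (e.g., dismissals)

wholly_reversed_rate = reversed_count / total_obviousness
at_least_partially_reversed_rate = (reversed_count + affirmed_in_part) / total_obviousness

print(f"Wholly reversed: {wholly_reversed_rate:.1%}")                          # ~34.3%
print(f"At least partially reversed: {at_least_partially_reversed_rate:.1%}")  # ~43.9%
```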

What is interesting to note is the breakdown by technology center within the USPTO. The technology centers contain the various art units to which patent applications are assigned based on how the claimed technology in each application is classified by the USPTO. Here is the summary of obviousness cases broken down by technology center, based on cases decided between July 25, 2016 and February 28, 2019.

TC 1600 (biotech/pharma): 2,333 total decisions; 1,907 obviousness decisions. 1,151 affirmed, 132 affirmed in part, 492 reversed. Wholly reversed rate: 25.8%; at least partially reversed rate: 32.7%.

TC 1700 (chemical): 3,950 total decisions; 3,633 obviousness decisions. 2,103 affirmed, 280 affirmed in part, 1,041 reversed. Wholly reversed rate: 28.7%; at least partially reversed rate: 36%.

TC 2100 (computer/electrical): 2,976 total decisions; 2,575 obviousness decisions. 1,467 affirmed, 228 affirmed in part, 739 reversed. Wholly reversed rate: 28.7%; at least partially reversed rate: 37.6%.

TC 2400 (computer/electrical): 3,164 total decisions; 2,791 obviousness decisions. 1,531 affirmed, 276 affirmed in part, 852 reversed. Wholly reversed rate: 30.5%; at least partially reversed rate: 40.4%.

TC 2600 (computer/electrical): 2,760 total decisions; 2,458 obviousness decisions. 1,420 affirmed, 241 affirmed in part, 677 reversed. Wholly reversed rate: 27.5%; at least partially reversed rate: 37.3%.

TC 2800 (computer/electrical): 1,979 total decisions; 1,648 obviousness decisions. 813 affirmed, 128 affirmed in part, 583 reversed. Wholly reversed rate: 35.4%; at least partially reversed rate: 43%.

TC 3600 (business methods/software): 5,941 total decisions; 4,212 obviousness decisions. 1,842 affirmed, 418 affirmed in part, 1,755 reversed. Wholly reversed rate: 41.7%; at least partially reversed rate: 51.6%.

TC 3700 (medical device/mechanical): 5,734 total decisions; 5,022 obviousness decisions. 1,929 affirmed, 625 affirmed in part, 2,195 reversed. Wholly reversed rate: 43.7%; at least partially reversed rate: 56.1%.
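As a sanity check, the per-technology-center percentages above can be reproduced from the raw counts with a short script. This is an illustrative sketch only; the counts are simply the ones reported in this post:

```python
# Per-TC obviousness outcomes from this post: (affirmed, affirmed_in_part, reversed)
tc_counts = {
    "1600": (1151, 132, 492),
    "1700": (2103, 280, 1041),
    "2100": (1467, 228, 739),
    "2400": (1531, 276, 852),
    "2600": (1420, 241, 677),
    "2800": (813, 128, 583),
    "3600": (1842, 418, 1755),
    "3700": (1929, 625, 2195),
}

# Total obviousness decisions per TC (includes a small number of other outcomes)
tc_totals = {"1600": 1907, "1700": 3633, "2100": 2575, "2400": 2791,
             "2600": 2458, "2800": 1648, "3600": 4212, "3700": 5022}

for tc, (aff, aff_part, rev) in tc_counts.items():
    total = tc_totals[tc]
    wholly = rev / total
    partial = (rev + aff_part) / total
    print(f"TC {tc}: wholly reversed {wholly:.1%}, at least partially reversed {partial:.1%}")
```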

We note that, despite the increase in the number of decisions, the data has not shifted more than a couple of percentage points from what we have reported in the past. This indicates that, with respect to obviousness, 1) the Examiners (and the two supervisory Examiners involved in the appeal conferences) are by and large very consistent in letting the same kinds of bad rejections go up on appeal, and 2) the Board is agreeing with applicants that this is the case at the same consistent rate. While some things, like the particular cases being taken on appeal today, have changed, the behavior of the USPTO has stayed markedly the same.

As a general observation, in the private sector a process producing 34% outright defective parts and 44% at least partially defective parts, as determined by its own internal quality control process (the Board), would likely be immediately regarded as a low quality, unpredictable process (and would probably put a company out of business). At Anticipat, we believe that the ex parte appeals statistics are the closest and best end-of-line quality control indicator of the quality of the USPTO's patent examination process. However, since these statistics are just an end-of-line indicator, trying to change Examiner behavior using the ex parte appeals statistics alone will not solve the quality problem. Reducing examiner variability will require the USPTO to identify meaningful inline (pre-appeal) statistical data monitors that can be used to reduce the variability in the examination process. The inline data monitors are what the USPTO could use to reduce the current levels of examination variability, and the end-of-line data (the ex parte appeals statistics) will show the effect.

Could such a statistics-based process be implemented at the USPTO? Certainly, but it can only begin when the agency acknowledges that statistics like these reflect unmistakably on the quality and predictability of the patents currently granted. Tightening the USPTO's distribution so that fewer ex parte appeals are reversed will inevitably (through the operation of statistics itself) lead to more predictable and higher quality issued patents. Until then, the only predictable thing is that USPTO Examiners will continue to be reversed by the PTAB at these double-digit rates for obviousness.

Top 10 Anticipat Blog Posts for 2018

With a new year upon us, it's a good time to look back. For us, 2018 was a year of good blogging. Here, we recap the most popular posts on the Anticipat blog in 2018.

Top 10 most visited posts in 2018 in order of highest unique page views

1) The PTAB quietly hit a milestone in June in reversing Alice Section 101 rejections

2) Update: These firms overturn abstract idea (Alice) rejections on appeal at PTAB

3) Understanding the Examiner Answer: analyze anything new and contest as needed

4) Berkheimer’s biggest effect on PTAB outcomes

5) How the biggest patent firms (Finnegan, Fish, Knobbe) do on appeal

6) Obviousness Reversal Rates Across Tech Centers: Unexpected Results

7) Expect the Berkheimer-driven patent-eligibility pendulum to swing at the PTAB

8) Business methods making comeback on appeal at the Board–Citing Berkheimer PTAB panel holds Examiner must show evidence

9) Board panel citing Berkheimer to reverse judicial exception rejection to diagnostics claims: no evidence

10) Number of abstract idea rejections decided at PTAB for August 2018 higher than ever, but reversal rate treads water

Of course, the order of these posts does not perfectly track how interesting or relevant their content is. Some of the popular posts were published at the beginning of the year and had more time to be accessed, while other posts were published late in the year. Also, some posts were shared on higher-profile media, giving them a broader audience.

A big lesson from these posts is that patent-eligibility, Berkheimer, and abstract ideas were the topics that most interested readers in 2018.

Anticipat blog recognized as top 100 IP blog

After a year and a half of posting, this blog is starting to get recognized. In addition to traffic growth, the Anticipat blog has been selected as one of the Top 100 Intellectual Property Blogs on the web by Feedspot. See https://blog.feedspot.com/intellectual_property_blogs/

Feedspot's list is a comprehensive ranking of intellectual property blogs on the internet. Anticipat comes in at #74, publishing at a rate of about one post a week.

This list highlights that there are many good IP blogs out there. In fact, we include a section in the right sidebar under BLOGS TO FOLLOW with a short list of some of them.

Stay tuned for many more interesting and relevant posts. We will continue providing content that is practical for the patent prosecutor.

The appeal outcome is one of the most telling metrics for patent prosecution analytics. Here’s why

Big data is slated to revolutionize all aspects of society, and patent prosecution is no exception. But because of the complexity of patent prosecution, insights from big data must be carefully considered. Here we look at why the appeal outcome is one of the most telling metrics: it yields good insights with few misleading alternative explanations.

Because much patent data is publicly available, some companies are amassing troves of it. And some metrics that suggest insight are relatively easy to calculate.

Take the allowance rate. You can compare an examiner's total granted patents to the examiner's total applications and voila. In some circles a high allowance rate is a good thing for both examiner and applicant. Under this theory, an Examiner with a high allowance rate is reasonable and applicant-friendly. By the same token, a law firm with a high allowance rate is good. The theory also holds that a low allowance rate marks an examiner or law firm as bad.

Another metric is the speed of prosecution. Take the average time it takes to get an application granted, measured in months, office actions, and/or RCEs. Under this theory, an examiner or law firm with a faster time to allowance is better.
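To make these two metrics concrete, here is a minimal sketch of how they might be computed. The counts, dates, and definitions below are hypothetical placeholders, not real USPTO data:

```python
from datetime import date

# Hypothetical tallies for a single examiner (placeholder numbers)
granted_patents = 180
total_applications = 240  # all applications examined by this examiner

# Allowance rate: granted patents compared to total applications
allowance_rate = granted_patents / total_applications
print(f"Allowance rate: {allowance_rate:.0%}")  # 75%

# Speed of prosecution: average months from filing to allowance (hypothetical pairs)
filing_to_allowance = [
    (date(2017, 6, 1), date(2019, 3, 1)),
    (date(2016, 9, 15), date(2018, 1, 10)),
]
months = [(allowed.year - filed.year) * 12 + (allowed.month - filed.month)
          for filed, allowed in filing_to_allowance]
print(f"Average months to allowance: {sum(months) / len(months):.1f}")  # 18.5
```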

While these theories could be true in certain situations, there are confounding explanations that lead to the opposite insight. In short, these metrics give woefully incomplete insights, for three reasons.

First, these metrics incorrectly assume that all patent applications are of the same quality. In reality, examiners are assigned patent applications in a non-random manner. The same applicant (because of the similarity of subject matter) will have a large proportion of applications assigned to a single Examiner or art unit. This means that related applications from the same applicant (drafted with the help of counsel) can be of very high or very low quality. Related low-quality applications can suffer from the same problems (little inventiveness, lack of novelty, poor drafting) that have nothing to do with the examiner. So an examiner, through no fault of his own, can get assigned a high percentage of "bad" applications. In these cases, a low allowance rate should reflect on the applications, not the examiner.

Correspondingly, clients instruct some law firms to prosecute patent applications in a non-random fashion. Especially for cost-sensitive clients, some firms are assigned very specialized subject matter to maximize efficiency. But not all subject matter is equally patentable. So even when a law firm has a low allowance rate, it often does not mean that the firm is doing a bad job. On the contrary, the firm could be doing a better-than-average job for the subject matter, especially in light of budget constraints given by the client.

This non-random distribution of patentable subject matter undermines use of a bare allowance rate or time to allowance.

Second, a favorable allowance rate or allowance time can mask a poor quality patent. That is, one job of patent counsel is to advocate for claim breadth. But examiners will propose narrowing amendments, even when not required by the patent laws, because it makes the examiner's job easier and the examiner does not want subsequent quality issues. So oftentimes a quick and easy notice of allowance merely signifies a narrow and less valuable patent. Office actions often include improper rejections, so a metric that rewards a quick compromise can show that the law firm isn't advocating enough. Thus, using these metrics to evaluate law firms could be telling you the opposite of what you want to know, depending on your goals.

Plus, getting a patent is not always what best serves the client's needs. Some clients do not want to pay an issue fee and subsequent maintenance fees on a patent that will never be used because it is so narrow and limiting. So it's a mark of good counsel when such clients are not given a patent just because a grant is the destination. Ideally, the client is brought into the business decision of how the patent will generate value. Counsel who does this, and whose allowance rate drops because of it, is unfairly portrayed as bad.

Third, these metrics lack substantive analysis. Correlating what happens with patent applications only goes so far without knowing the points at issue that led to the particular outcomes. Some applicants deliberately delay prosecution for various reasons using various techniques, including filing RCEs. There are legitimate strategies that benefit the client in doing so.

All this is not to say that such metrics are not useful. They are. These metrics simply need a lot of context for the true insight to come through.

In contrast to the above patent prosecution metrics, Anticipat's appeal outcome metric very directly evaluates the relevant parties in substantive ways. It reveals what happens when an applicant believes that his position is right and the examiner and his supervisor believe that theirs is. In such cases, who is right? The parties are forced to resolve the dispute before an independent panel of judges. And because getting to such a final decision requires significant time and resources (e.g., filing briefs and one or more appeal conferences that can kick out cases before they reach the Board), the stakes are relatively high. This weeds out alternate explanations for an applicant choosing to pursue an appeal. Final decisions thus stem from a desire and a position to win a point of contention that the Examiner thinks he's right on, not from a whimsical exercise. And tracking this provides insights into the reasonableness of Examiner rejections and into how counsel advises and advocates.

The appeal metric helps evaluate the Examiner because being overturned on certain grounds of rejection (say, more often than average) says something about the examiner's and his supervisor's ability to apply and assess rejections. After working with an examiner, there can come a point when the examiner will not budge. The PTAB then becomes the judge of whether the Examiner is right or wrong. The same analysis works for evaluating whole art units or technology centers.

With Practitioner Analytics, you can see the reversal rates of specific Examiners, art units, and tech centers, and you can also look at the specific arguments that have been found persuasive at the Board. This means that if you are dealing with a specific point of contention, say the "motivation to combine" issue in obviousness, you can pull up all the cases where the PTAB overturned your examiner on that point. The overall reversal rate and the specific rationales can together show that the Examiner is not in line with the patent laws and rules.
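As an illustration of how this kind of drill-down can work, the sketch below filters a set of appeal-decision records by examiner and by issue tag and computes a reversal rate. The record structure, field names, and example values are hypothetical placeholders; this is not the Anticipat product or its data:

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class AppealDecision:
    examiner: str
    art_unit: str
    ground: str                                      # e.g., "103 obviousness"
    tags: List[str] = field(default_factory=list)    # e.g., ["motivation to combine"]
    outcome: str = "affirmed"                        # "affirmed", "affirmed-in-part", or "reversed"

def reversal_rate(decisions: List[AppealDecision],
                  examiner: Optional[str] = None,
                  tag: Optional[str] = None) -> Optional[float]:
    """Share of matching decisions in which the Board reversed outright."""
    matching = [d for d in decisions
                if (examiner is None or d.examiner == examiner)
                and (tag is None or tag in d.tags)]
    if not matching:
        return None
    return sum(d.outcome == "reversed" for d in matching) / len(matching)

# Hypothetical usage: how often has this examiner been reversed on "motivation to combine"?
decisions = [
    AppealDecision("Examiner A", "3689", "103 obviousness", ["motivation to combine"], "reversed"),
    AppealDecision("Examiner A", "3689", "103 obviousness", ["hindsight"], "affirmed"),
]
print(reversal_rate(decisions, examiner="Examiner A", tag="motivation to combine"))  # 1.0
```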

The appeal metric also helps evaluate counsel because it shows what happens when counsel believes their position is right even though the examiner will not budge. If counsel wins on appeal, the metric confirms counsel's judgment and their ability to make persuasive arguments in briefs.

Even a ninja can generate allowance rate metrics. But the savvy patent practitioner looks for more context to guide prosecution strategy. Insights drawn from carefully analyzed data avoid the confounding explanations described above.

How often do the largest patent firms appeal?

We recently reported on eight reasons to consider filing an appeal during the course of patent prosecution. Based on the current, relatively low number of appeals across all applications, we suggested that some law firms may be underutilizing the appeal procedure in their practices. Now, we report the differences among the three largest patent firms. Our methods are explained in more detail at the end.

The top firms are 1) Finnegan, Henderson, Farabow, Garrett & Dunner LLP; 2) Fish & Richardson PC; and 3) Knobbe Martens. This ranking comes from a recently published PatentlyO blog post on the biggest firms by total registered patent attorneys/agents. The PatentlyO post put Finnegan in the lead, with Fish second and Knobbe close behind. This doesn't mean that these firms have the most patent prosecution work, but it at least puts us in the ballpark. We report here that these three firms have far different numbers of ex parte PTAB appeals.

From July 25, 2016 to February 22, 2018, Fish & Richardson had 143 appeals. Finnegan was second with 78 appeals. And Knobbe was third with 60 appeals. While Fish and Knobbe had roughly the same number of patent applications (60,916 and 58,170, respectively) across all customer numbers searched, Fish had more than double the appeals. Even Finnegan, which totaled a third fewer applications (41,194) than Knobbe, had more appeals than Knobbe. 

The disparate number of appeals across these firms stems from a confluence of factors. One factor relates to the law firm itself. That is, a law firm may over- or under-sell the benefits of an appeal. 

Some practitioners get comfortable preparing amendment-style office action responses because they are the most common response. Knowingly or not, psychological biases can influence the response strategy that a practitioner chooses or recommends. A practitioner whose strategy recently succeeded in one case might let that success influence strategy in an unrelated case simply because it is more recent. Plus, projects with fixed or capped fees favor efficiency, and practitioners may opt for work they are efficient at doing. These biases can be reinforced by billable-hour incentives, because prosecution can always be continued with an RCE.

Finally, in law firms' defense, until now there hasn't been a way to quantify the chances of succeeding on the merits of an appeal. A wealth of experience can put someone in the general vicinity, but even then the picture is incomplete.

So part of the reason why firm appeal rates differ is law firm-specific.

Another factor in why law firms pursue appeals at such different levels relates to the client. Some clients simply care less about the quality of patents. To them, numbers are more important. So despite the appeal procedure having several advantages for getting a good patent, it may not be necessary for some clients' goals. An allowed application, even with narrow, unusable claims, may be good enough.

That clients drive appeals is perhaps best shown in the unequal distribution of appeals across customer numbers within a given firm. Pockets of appeals may show up disproportionately high for one customer number and low for others, suggesting that the decision to file an appeal largely depends on the client. Some clients may not like the concept of appealing. And as every lawyer knows, even excellent advice can only go so far, after which the client makes the call. 

Further still, different firms have different clientele, and some clients have more patentable subject matter than others. Oftentimes, appeals are pursued only after options with the examiner have been exhausted. So if firms operate under this paradigm (and we are not suggesting that these three firms do), the clients with less patentable subject matter might appeal more. But part of advocating includes not only accepting the client's money, but also providing realistic feedback on patentability.

Another reason a client may shy away from an appeal is a lack of trust in the practitioner. The upfront cost of appealing is not small change. And the client may interpret a suggestion to appeal as a way to extract more money from the client, even when made with the best of intentions. Up until now, there has not been a good way to objectively convey the chances of succeeding on appeal and advancing prosecution.

But now, with Anticipat Practitioner Analytics, you can print out an unlimited number of professional reports that show how often the Board overturns specific grounds of rejection relevant to a specific examiner. These reports include the specific points, called tags, and the legal authority that the Board relied on in overturning similar rejections.

For example, take a Section 101 rejection asserting an abstract idea. You believe the examiner is wrong on step 1. By looking at Anticipat, you can see where the Board has overturned abstract idea rejections based on step 1 for this examiner, art unit, or tech center. With this knowledge, you can feel more confident in advising an appeal. A data-driven approach can greatly improve the advice and build trust in the strategy.

Give Anticipat a try with a 14-day free trial. Our team is happy to provide a demo.

Methods

We looked up customer numbers for the three firms using a publicly available dataset. We then analyzed the customer numbers associated with the three firms, of which there were many. Finnegan has at least 55 customer numbers, totaling 41,194 applications. Fish has more than 100, totaling 60,916 applications. And Knobbe has at least 50 customer numbers, totaling 58,170 applications. We then plugged the customer numbers into Anticipat's Research page and tallied up the totals for the relevant window of time.
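For those curious about the tallying step, here is a rough sketch, assuming you have already mapped each firm to its customer numbers and can list appealed applications by customer number. The data structures and values below are placeholders, not the actual dataset or the Anticipat interface:

```python
# Hypothetical mapping of firms to USPTO customer numbers (heavily abbreviated;
# the real firms each have dozens of customer numbers)
firm_customer_numbers = {
    "Finnegan": {"000001", "000002"},
    "Fish": {"000101", "000102"},
    "Knobbe": {"000201"},
}

# Hypothetical (customer_number, appeal_decision_id) pairs for the window of interest
appeals = [("000001", "2017-001234"), ("000101", "2017-005678"), ("000101", "2018-000042")]

appeal_counts = {firm: 0 for firm in firm_customer_numbers}
for customer_number, _decision_id in appeals:
    for firm, numbers in firm_customer_numbers.items():
        if customer_number in numbers:
            appeal_counts[firm] += 1

print(appeal_counts)  # {'Finnegan': 1, 'Fish': 2, 'Knobbe': 0}
```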

Anticipat’s Mission: Help Patent Practitioners Succeed with the Best Data

We at Anticipat have a passion for improving patent prosecution by harnessing better data. We want our users to succeed in their own practices with the help of this data.

Better data includes aggregate ex parte appeals data that is relevant to the grounds of rejection practitioners face. That is, an Office Action with a particular ground of rejection and specific reasoning has very likely been overturned on appeal in another application. We connect these dots for you.

Better data also includes more general metrics, such as the reversal rates for specific grounds of rejection for a given Examiner, art unit, or tech center. It also includes the arguments and legal authority that the Board has used in overturning specific Examiner rejections.

While much of Anticipat's initial focus and expertise is on Board data, that data is only one piece of the puzzle. Our holistic approach requires data from all facets of patent prosecution, as well as a deep understanding of the context of patent prosecution procedures. We strive to understand the incentives and behavioral decision-making patterns of all parties involved in the patent system so that USPTO statistics are understood and applied in their proper context.

Only by having the best data can you optimally guide your prosecution strategy. With this arsenal of data, you can anticipate expected outcomes and put yourself in the best position for success. We hope you’ll join us on the journey. Click here to get started.

Recent Rehearing Decision Reverses Panel’s Previous Affirmance on Section 101

Losing a Section 101 appeal at the PTAB can sting. In many cases, continued examination is off the table as further amendments may not help the cause. And appealing up to the courts involves spending a lot of time and money. But there is another option: filing a request for rehearing. A recent decision shows that this procedure is not fruitless for Section 101 rejections, even if it may seem like it is. 

In the recently decided Ex parte MacKay, Appeal No. 2015-008232 (September 20, 2017), the panel reversed on rehearing a Section 101 rejection that it had affirmed in its initial decision. In the rehearing decision, the panel was less than wordy when it acknowledged that the relied-upon identification of an abstract idea (i.e., "rules for playing a game") may not correspond to the limitations recited in the claims on appeal. Instead, the claim recites the creation of a game board surface image. The panel concluded that the record failed to adequately establish that the claims at issue are directed to an abstract idea, and the rejection under 35 U.S.C. § 101 was not sustained.

On its face, intuition might suggest that requests for rehearing are a futile endeavor. And perhaps the numbers reflect this futility. The percentage of applications that get appealed to the PTAB is quite low, around 1-2%. The percentage of appeals in which the applicant files a request for rehearing is lower still, about 1-2% of appealed decisions, which works out to very roughly 0.01-0.04% of all applications. On the surface it makes sense why this procedure is rarely used. But it should not be taken out of consideration, for the following two reasons.

First, an appellant may present a new argument based on a recent relevant decision of either the Board or the Federal Circuit. But unless a case comes out that supports the appellant's position and is directly on point, the rehearing panel can easily distinguish it. Plus, with such a short window between the appeal decision and the rehearing decision, unless the Board failed to consider a key case in its original decision, it seems less likely that an appellant's new argument will save the day.

Second, an appellant should consider rehearing to show that the Board misapprehended or overlooked points. See 37 C.F.R. § 41.52(a)(1). Because the same panel of judges that rendered the initial decision rules on the request for rehearing, it might seem unlikely that the panel would admit it misapprehended points in its earlier decision. But it turns out that it can work, as shown in the case above.

In conclusion, if you're feeling out of options after an unsuccessful appeal to the PTAB, consider filing a request for rehearing. It's fast (only a few extra months of wait time for a decision) and, as shown above, there's a chance it helps reverse the rejection. Plus, the cost is minuscule compared to appealing to the Federal Circuit or the Eastern District of Virginia (the other options for seeking redress of an unfavorable PTAB decision).