Ex parte Decisions have been Updated on Anticipat

We have been analyzing ex parte decisions at the PTAB for many years now. Each day, we can see the decisions that have been imported from the USPTO. This came in handy a few months ago, when USPTO personnel told us that they had completed a migration of all ex parte PTAB decisions to a modernized webpage. While we were excited about this new functionality (including a RESTful API), we started noticing abnormalities in the data.

For example, July 2019 (the month when the transition to the new page took place) had only 150 decisions. That was a much lower number of appeal decisions than we were used to seeing. The next month, August, had even fewer, with 109 such decisions. By contrast, June 2019 (the month before the transition took effect) had 732 ex parte decisions. This is in line with prior months, even if busy end-of-quarter months not uncommonly exceed 1,000 decisions.
But for the fairest comparison, the same month of the prior year, July 2018, had 771 such decisions, and June 2018 had 766. So to have only 150 decisions for July 2019, and even fewer for August, seemed strange to us. With such a dramatic decrease from the historical volume of these decisions, it seemed highly unlikely that the cause was a sudden drop in output by the PTAB.
We reached out to USPTO personnel with our findings, and they confirmed that there was a glitch that they would resolve. Several weeks later (in fact, last Friday), the missing decisions for the last few months were restored to the USPTO page. Our importer hungrily got back to work.
With so many decisions to process in one business day, this Monday's daily recap email came out in an abnormal form. But by Tuesday, we were back to our normal daily email, showing that the USPTO had published 66 decisions in one day.
Get this fresh recap of PTAB decisions delivered straight to your inbox by signing up for an Anticipat membership.

Now that we have an updated list of decisions for the past several months, we will continue posting trends and insights about appealed decisions. If you are interested in trying out the Anticipat Research database for yourself, sign up for a 14-day free trial here: https://anticipat.com/accounts/signup

Anticipat has fully transitioned to a new data source for PTAB appeals data

Over the past few years, the USPTO has modernized its data offerings with new websites and APIs. Just recently, the USPTO confirmed completion of a new PTAB site with successfully migrated appeal decisions. Anticipat has now fully transitioned to using this new data source. 

Over the years, many practitioners have become familiar with the old PTAB eFOIA webpage. Its 1990s-style interface and functionality made an indelible impression. In fact, we previously discussed seven key shortcomings of the PTO webpage. But now its life is all but over.

Several weeks ago, we noticed that new decisions were no longer being published to the beleaguered webpage for appeal decisions (https://e-foia.uspto.gov/Foia/PTABReadingRoom.jsp). We also saw a reference to a new page on the old webpage (circled in yellow below).

In response to our outreach, the Office of the General Counsel at the USPTO confirmed that it had completed the migration of this data to the new webpage (https://developer.uspto.gov/ptab-web/#/search/decisions).

We will continue to provide the same type of curated content for ex parte appeal decisions at the PTAB. And now, with a more modern and robust data source, the potential insights are even greater. Stay tuned for developments!

Get fresh ex parte PTAB decisions delivered to your inbox

For those who like to stay current on the latest appeals decided by the PTAB, we have good news. Anticipat Email Recap just rolled out a major new feature. Now, rather than waiting for a particular decision date to be populated, you can get all the organized and curated decisions from the previous day delivered right when you start off the day.

For some time, the USPTO published its appeal decisions in a somewhat reliable and timely manner. We could deliver recaps based on decision date, with some lag. But as the lag grew larger and larger, much of the value of these emails diminished. Now, you can see every decision that was published on the eFOIA webpage, with a less-than-24-hour turnaround time.

For those interested in recaps covering longer spans of time, we have built in that preference as well.

You can also apply a filter for only those decisions that meet certain criteria, e.g., from a particular art unit, examiner, issue type, or outcome. This can minimize emails that aren't relevant to you or your practice.

As many are aware, our world has too much information. We’d like to help you get the most value out of PTAB appeal decisions. Give the new fresh recap a try here: https://anticipat.com/accounts/signup/research/

For Obviousness, Some Things Change but the Board Statistics Remain the Same

The Anticipat research database continues to comprehensively cover all legal grounds of rejection considered by the Patent Trial and Appeal Board (PTAB) in its ex parte appeal decisions. This ranges from more exotic grounds, like statutory subject matter (Section 101), to much more common issues, like obviousness (Section 103).

From July 25, 2016 to February 28, 2019, there were 24,448 obviousness decisions out of 29,102 total decisions, meaning that nearly 84% of all appeals involve obviousness. By this measure, obviousness is the most common ground of rejection decided at the Board. In this post, the data considered excludes decisions where the outcome involved a new ground of rejection based on obviousness, as these form only a small fraction of the cases.

Of the 24,448 total obviousness decisions, 12,369 were affirmed, 8,386 were reversed, and 2,354 were affirmed in part. Thus, the wholly reversed rate (all claims in a case reversed on obviousness) was about 34%. The at-least-partially reversed rate (at least one claim in the case found patentable) was about 44% (43.9%).
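These rates follow directly from the reported counts; a minimal sketch of the arithmetic, using the figures above:

```python
# Overall obviousness outcome counts reported above.
total = 24448
reversed_outright = 8386
affirmed_in_part = 2354

# Wholly reversed: every appealed claim escapes the obviousness rejection.
wholly_reversed_rate = reversed_outright / total

# At least partially reversed: at least one claim found patentable.
partial_reversed_rate = (reversed_outright + affirmed_in_part) / total

print(f"Wholly reversed: {wholly_reversed_rate:.1%}")               # 34.3%
print(f"At least partially reversed: {partial_reversed_rate:.1%}")  # 43.9%
```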

What is interesting is the breakdown by technology center within the USPTO. The technology centers contain the various art units to which patent applications are assigned based on how the claimed technology in each application is classified by the USPTO. Here is the summary of obviousness cases broken down by technology center for cases decided between July 25, 2016 and February 28, 2019.

TC 1600 (biotech/pharma): 2,333 total decisions; 1,907 obviousness decisions. 1,151 affirmed, 132 affirmed in part, 492 reversed. Wholly reversed rate: 25.8%; at least partially reversed rate: 32.7%.

TC 1700 (chemical): 3,950 total decisions; 3,633 obviousness decisions. 2,103 affirmed, 280 affirmed in part, 1,041 reversed. Wholly reversed rate: 28.7%; at least partially reversed rate: 36.4%.

TC 2100 (computer/electrical): 2,976 total decisions; 2,575 obviousness decisions. 1,467 affirmed, 228 affirmed in part, 739 reversed. Wholly reversed rate: 28.7%; at least partially reversed rate: 37.6%.

TC 2400 (computer/electrical): 3,164 total decisions; 2,791 obviousness decisions. 1,531 affirmed, 276 affirmed in part, 852 reversed. Wholly reversed rate: 30.5%; at least partially reversed rate: 40.4%.

TC 2600 (computer/electrical): 2,760 total decisions; 2,458 obviousness decisions. 1,420 affirmed, 241 affirmed in part, 677 reversed. Wholly reversed rate: 27.5%; at least partially reversed rate: 37.3%.

TC 2800 (computer/electrical): 1,979 total decisions; 1,648 obviousness decisions. 813 affirmed, 128 affirmed in part, 583 reversed. Wholly reversed rate: 35.4%; at least partially reversed rate: 43.1%.

TC 3600 (business methods/software): 5,941 total decisions; 4,212 obviousness decisions. 1,842 affirmed, 418 affirmed in part, 1,755 reversed. Wholly reversed rate: 41.7%; at least partially reversed rate: 51.6%.

TC 3700 (medical device/mechanical): 5,734 total decisions; 5,022 obviousness decisions. 1,929 affirmed, 625 affirmed in part, 2,195 reversed. Wholly reversed rate: 43.7%; at least partially reversed rate: 56.1%.
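The per-technology-center percentages can be recomputed from the raw counts; a small sketch using the figures reported in this post:

```python
# Per-TC counts reported above:
# (total obviousness decisions, affirmed, affirmed in part, reversed)
tc_data = {
    "1600": (1907, 1151, 132, 492),
    "1700": (3633, 2103, 280, 1041),
    "2100": (2575, 1467, 228, 739),
    "2400": (2791, 1531, 276, 852),
    "2600": (2458, 1420, 241, 677),
    "2800": (1648, 813, 128, 583),
    "3600": (4212, 1842, 418, 1755),
    "3700": (5022, 1929, 625, 2195),
}

for tc, (total, aff, aip, rev) in tc_data.items():
    wholly = rev / total                 # all appealed claims reversed
    partial = (rev + aip) / total        # at least one claim reversed
    print(f"TC {tc}: wholly reversed {wholly:.1%}, "
          f"at least partially reversed {partial:.1%}")
```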

We note that, despite the increase in the number of decisions, the data has not shifted more than a couple of percentage points from the figures we have reported in the past. This indicates that, with respect to obviousness, examiners (and the two supervisory examiners involved in appeal conferences) are very consistent in letting the same kinds of weak rejections proceed to appeal, and the Board agrees with applicants at the same consistent rate. While some things, like the particular cases being taken on appeal today, have changed, the behavior of the USPTO has stayed markedly the same.

As a general observation, in the private sector, a process producing 34% outright defective parts and 44% partially defective parts, as determined by its own internal quality-control process (the Board), would likely be regarded immediately as a low-quality, unpredictable process (and would probably put a company out of business). At Anticipat, we believe that the ex parte appeals statistics are the closest and best end-of-line quality-control indicator of the USPTO's patent examination process. However, because these statistics are only an end-of-line indicator, trying to change examiner behavior solely by using them will not solve the quality problem. Reducing examiner variability will require the USPTO to identify meaningful inline statistical data monitors (pre-appeal) that can be used to reduce variability in the examination process. Those inline monitors are what the USPTO could use to reduce the current levels of examination variability, and the end-of-line data (the ex parte appeals statistics) will show the effect as well.

Could such a statistics-based process be implemented at the USPTO? Certainly, but it can only begin when the agency acknowledges that statistics like these reflect unmistakably on the quality and predictability of the patents currently granted. Tightening the USPTO's distribution to lower the ex parte appeals reversal rate will inevitably (through the operation of statistics itself) result in more predictable and higher-quality issued patents. Until then, the only predictable thing is that USPTO examiners will continue to be reversed by the PTAB at these double-digit rates for obviousness.

Top 10 Anticipat Blog Posts for 2018

With a new year upon us, it's sometimes worthwhile to look backwards. For us, 2018 was a year of good blogging. Here, we recap the most popular posts on Anticipat's blog in 2018.

Top 10 most visited posts in 2018, in order of highest unique page views

1) The PTAB quietly hit a milestone in June in reversing Alice Section 101 rejections

2) Update: These firms overturn abstract idea (Alice) rejections on appeal at PTAB

3) Understanding the Examiner Answer: analyze anything new and contest as needed

4) Berkheimer’s biggest effect on PTAB outcomes

5) How the biggest patent firms (Finnegan, Fish, Knobbe) do on appeal

6) Obviousness Reversal Rates Across Tech Centers: Unexpected Results

7) Expect the Berkheimer-driven patent-eligibility pendulum to swing at the PTAB

8) Business methods making comeback on appeal at the Board–Citing Berkheimer PTAB panel holds Examiner must show evidence

9) Board panel citing Berkheimer to reverse judicial exception rejection to diagnostics claims: no evidence

10) Number of abstract idea rejections decided at PTAB for August 2018 higher than ever, but reversal rate treads water

Of course, the order of these posts does not completely correlate with the most interesting or relevant content. Some of the popular posts were published at the beginning of the year, with more time to be accessed, while other posts were published late in the year. Also, some posts happened to be shared on higher-profile media, giving them a broader audience.

A big lesson from these posts is that patent-eligibility, Berkheimer and abstract ideas were very interesting topics in 2018.

Anticipat blog recognized as top 100 IP blog

After a year and a half of posting, this blog is starting to get recognized. In addition to traffic growth, Anticipat blog has been selected as one of the Top 100 Intellectual Property Blogs on the web by Feedspot. See https://blog.feedspot.com/intellectual_property_blogs/

Feedspot’s top 100 list for IP blogs is a very comprehensive ranking of intellectual property blogs on the internet. Anticipat comes in at #74, publishing at a rate of about one blog post a week.

This list highlights that there are many good IP blogs out there. In fact, we include a section in the right sidebar, under BLOGS TO FOLLOW, with a short list of some of them.

Stay tuned for many more interesting and relevant posts. We will continue providing content that is practical for the patent prosecutor.

The appeal outcome is one of the most telling metrics for patent prosecution analytics. Here’s why

Big data is slated to revolutionize all aspects of society, and patent prosecution is no exception. But because of the complexity of patent prosecution, insights from big data must be carefully considered. Here we look at why appeal outcomes are one of the most telling metrics: they offer good insights with few misleading explanations.

Because much of patent data is publicly available, some companies are amassing troves of patent data. And some metrics that suggest insight are relatively easy to calculate.

Take the allowance rate. You can compare an examiner's total granted patents to the examiner's total applications, and voila. In some circles, a high allowance rate is a good thing for both examiner and applicant. Under this theory, an examiner with a high allowance rate is reasonable and applicant-friendly. By the same token, a law firm with a high allowance rate is good. This theory also holds that an examiner or law firm with a low allowance rate is bad.

Another metric is the speed of prosecution. Take the average time it takes to grant an application, whether measured in months, office actions, and/or RCEs. Under this theory, an examiner or law firm with a faster time to allowance is better.
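Both of these naive metrics are simple ratios. A minimal sketch, using entirely hypothetical numbers (none of these figures come from the post):

```python
# Allowance rate: granted patents over total disposed applications
# (hypothetical example numbers, for illustration only).
granted = 180
total_disposed = 240  # grants plus abandonments
allowance_rate = granted / total_disposed
print(f"Allowance rate: {allowance_rate:.0%}")  # 75%

# Speed of prosecution: average pendency of granted applications,
# here measured in months (hypothetical pendencies).
months_to_allowance = [14, 22, 31, 18]
avg_months = sum(months_to_allowance) / len(months_to_allowance)
print(f"Average time to allowance: {avg_months:.1f} months")
```

As the surrounding discussion explains, these numbers are easy to compute but easy to misread without context about application quality and client strategy.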

While these theories could be true in certain situations, there are confounding explanations that can point to the opposite conclusion. These metrics are woefully incomplete for three reasons.

First, these metrics incorrectly assume that all patent applications are of the same quality. In reality, examiners are assigned patent applications in a non-random manner. The same applicant (because of the similarity of subject matter) will have a large proportion of applications assigned to a single examiner or art unit. This means that related applications from the same applicant (drafted with the help of counsel) can be of very high or very low quality. Related low-quality applications can suffer from the same problems (little inventiveness, lack of novelty, poor drafting) that do not depend on the examiner. So an examiner, through no fault of his own, can be assigned a high percentage of “bad” applications. In these cases, a low allowance rate should reflect on the applications, not the examiner.

Correspondingly, clients instruct some law firms to prosecute patent applications in a non-random fashion. Especially for cost-sensitive clients, some firms are assigned very specialized subject matter to maximize efficiency. But not all subject matter is equally patentable. So even when a law firm has a low allowance rate, that often does not mean the firm is doing a bad job. On the contrary, the firm could be doing a better-than-average job for the subject matter, especially in light of budget constraints imposed by the client.

This non-random distribution of patentable subject matter undermines use of a bare allowance rate or time to allowance.

Second, a favorable allowance rate or time-to-allowance can mask a poor-quality patent. One job of patent counsel is to advocate for claim breadth. But examiners will propose narrowing amendments, even when not required by the patent laws, because doing so makes the examiner's job easier and avoids subsequent quality issues. So a quick and easy notice of allowance often merely signifies a narrow and less valuable patent. Office actions often include improper rejections, so a metric that rewards a quick compromise can indicate that the law firm isn't advocating strongly enough. Thus, using these metrics to identify good law firms could be telling you the opposite, depending on your goals.

Plus, getting a patent is not always what best serves the client's needs. Some clients do not want to pay an issue fee and subsequent maintenance fees on a patent that will never be used because it is so narrow and limiting. So it is a mark of good counsel when such clients are not handed a patent just because a grant is the destination. The client is ideally brought into the business decision of how the patent will generate value. Counsel that does this, and whose allowance rate drops as a result, is unfairly portrayed as bad.

Third, these metrics lack substantive analysis. Correlating what happens with patent applications only goes so far without knowing the points at issue that led to the particular outcomes. Some applicants deliberately delay prosecution using various techniques, including filing RCEs. These can be legitimate strategies that benefit the client.

All this is not to say that such metrics are not useful. They are. These metrics simply need a lot of context for the true insight to come through.

In contrast to the above metrics, Anticipat's appeal outcome directly evaluates the relevant parties in substantive ways. The appeal outcome metric reveals what happens when an applicant believes that his position is right while the examiner and his supervisor think that theirs is. In such cases, who is right? The parties are forced to resolve the dispute before an independent panel of judges. And because getting to such a final decision requires significant time and resources (e.g., filing briefs and one or more appeal conferences that can kick cases out before they reach the Board), the stakes are relatively high. This weeds out alternative explanations for an applicant choosing to pursue an appeal. Final decisions thus stem from a desire and a position to win a point of contention that the examiner thinks he is right on, not a whimsical exercise. And tracking this provides insights into the reasonableness of examiner rejections and into how counsel advises and advocates.

The appeal metric helps evaluate the examiner because being overturned on certain grounds of rejection (say, more often than average) says something about the examiner's and his supervisor's ability to apply and assess rejections. After working with an examiner, there can come a point when the examiner will not budge. The PTAB then becomes the judge of whether the examiner is right or wrong. The same analysis works for evaluating whole art units or technology centers.

With Practitioner Analytics, you can see the reversal rates of specific examiners, art units, and tech centers, and you can also look at the specific arguments that the Board has found persuasive. This means that if you are dealing with a specific point of contention, say the “motivation to combine” issue in obviousness, you can pull up all the cases where the PTAB overturned your examiner on that point. The overall reversal rate and the specific rationales can together validate that the examiner is not in line with the patent laws and rules.

The appeal metric also helps evaluate counsel because it shows the results when counsel believes their position is right and the examiner will not budge. If counsel wins on appeal, the metric confirms counsel's judgment and their ability to make persuasive arguments in briefs.

Even a ninja can generate allowance rate metrics. But the savvy patent practitioner looks for more context to guide prosecution strategy. Insights drawn from carefully analyzed data avoid misleading, counter-intuitive explanations.