Update: These firms overturn abstract idea (Alice) rejections on appeal at PTAB

(Update: Kilpatrick was previously reported as having 4 reversals; in fact, it has 7)

A previous post showcased firms that successfully appeal abstract idea rejections at the PTAB. In that post, two firms stood out as clear leaders in overcoming the most difficult ground of rejection on appeal, the Section 101 abstract idea rejection: Schwegman Lundberg Woessner and Morgan Lewis. Five months later, we update the list of top firms to add Kilpatrick Townsend and, with the aid of recently introduced Customer Number lookup functionality, provide additional context on how many appeals it took each firm to get there.

Total Reversals for Abstract Idea Rejections (Numerator)

In an almost two-year span post-Alice (July 25, 2016-April 30, 2018), there were 189 reversed abstract idea rejections on appeal at the PTAB. Three firms (Schwegman, Morgan Lewis, and Kilpatrick Townsend) accounted for 11% of these reversals, with 7 reversals each, putting them far ahead of the field. For context, the next closest firm had 3 abstract idea reversals on appeal. We discuss each of these three firms in more detail below.

Total Abstract Idea Appeals (Denominator)

The first firm, Schwegman, took 42 abstract idea appeals to get its 7 reversals, a reverse rate of 17%. This is higher (more successful) than the average reverse rate for abstract idea rejections. Compared to other big patent firms, Schwegman also pursues appeals of abstract idea rejections far more often. For comparison, during this window Knobbe Martens had 6 total abstract idea appeal decisions, while Fish & Richardson and Finnegan each had 19.

But even with a more aggressive appeal strategy, Schwegman still maintains a higher-than-average reversal rate. And of the firm's 204 total appealed decisions, about a fifth involve an abstract idea rejection, suggesting that abstract idea rejections are a focus of its overall appeal practice. The firm's information can be filtered on the Anticipat Research page; see the Schwegman-filtered page here.


The second firm, Morgan Lewis, took far fewer appeals to get its 7 reversals: it appealed only eight cases to win seven, a reversal rate of 88% for abstract idea rejections. For a firm as big as Morgan Lewis, having only eight abstract idea appealed decisions is low compared to firms with comparable numbers of applications: Schwegman, Finnegan, Fish, Kilpatrick, and Knobbe.

The overall number of appeals for Morgan Lewis during this time period is 52. This suggests that Morgan Lewis is conservative in pursuing ex parte appeals, not only for abstract idea rejections but in general. But when Morgan Lewis does proceed with an appeal (at least for Section 101 abstract idea rejections), it is very good at overturning such rejections. Again, see the Research page and the Morgan Lewis-filtered Research page here.


The third firm, Kilpatrick, appealed 40 abstract idea rejections to get its 7 reversals. This reversal rate of 18% is slightly above average, and the volume suggests that Kilpatrick aggressively pursues appeals of this type of rejection. Its 170 total appeals during the period show that abstract ideas make up a sizable share of its appealed rejections. See the Kilpatrick Townsend-filtered Research page here.
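The arithmetic behind the three firms' numbers is straightforward; here is a minimal sketch using only the figures quoted above (the dictionary and helper are illustrative, not part of any Anticipat tool):

```python
# (total abstract idea appeals, reversals) per firm, from the figures above
firms = {
    "Schwegman": (42, 7),
    "Morgan Lewis": (8, 7),
    "Kilpatrick": (40, 7),
}

# reversal rate = reversals / appeals, as a rounded percentage
rates = {name: round(100 * won / taken) for name, (taken, won) in firms.items()}
# Schwegman 7/42 -> 17%, Morgan Lewis 7/8 -> 88%, Kilpatrick 7/40 -> 18%

# the three firms' combined share of all 189 reversals
share_of_all = round(100 * 3 * 7 / 189)  # 21 of 189 -> 11%
```

The same denominator caveat discussed in the post applies: a firm that appeals more often will tend to show a lower rate even with equally strong advocacy.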



Each firm should be commended on the high number of abstract idea reversals. With such a difficult rejection, these firms are showing that one avenue of overcoming the rejection is by going straight to the Board for relief.

Context is extremely important for these statistics. Just because a particular firm has a higher reversal rate than another firm does not necessarily mean that the higher reversal rate firm is better. Perhaps the lower reversal rate firm is taking on more difficult cases. Perhaps the lower reversal rate firm had victories earlier in prosecution (like at the pre-appeal conference or appeal conference or even by responding to an Office Action) that are not counted in these statistics. But these statistics do show that when the Examiner conferees believe that an abstract idea rejection is proper, these firms know how to pursue a favorable outcome for their clients.

With a user account to Anticipat (sign up here for a free trial), you can look up the above-discussed listing of reversed abstract idea decisions using the following links.





How the biggest patent firms (Finnegan, Fish, Knobbe) do on appeal

We recently reported that the top patent firms (by registered practitioners, as featured in a Patently-O post) pursue ex parte appeals very differently, despite apparently equal knowledge of the benefits of pursuing an appeal to further prosecution. While this finding is interesting, pursuing an appeal and winning on appeal are two different things. Here we report on the differences in appeal outcomes among three firms: Finnegan, Fish & Richardson, and Knobbe Martens.

As brief background, we have found that average reversal rates among the various grounds of rejection to be quite stable. In a recent post, we reported that across the entire USPTO, § 101 has about a 20% reversal rate on appeal, §§ 102 and 112 hover at about 50%, and § 103 is around 33%. To look at these firms’ outcomes, we used Anticipat’s Research database and Practitioner Analytics between July 25, 2016 through March 12, 2018.

The firm-specific appeals outcome data

The three firms exceed the USPTO average reversal rates in almost all respects. First, Finnegan.

Ground of rejection | Affirmed | Affirmed in part | Reversed | Reverse rate
§ 101 | 12 | 0 | 2 | 14%
§ 102 | 4 | 0 | 12 | 75%
§ 103 | 28 | 3 | 28 | 47%
§ 112 | 1 | 0 | 9 | 90%
OTDP | 5 | 0 | 1 | 17%
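The reverse rates in these tables follow directly from the counts: wholly reversed decisions divided by all decided rejections for that ground. A minimal sketch, using the Finnegan counts above (the helper is illustrative, not an Anticipat API):

```python
# (affirmed, affirmed-in-part, wholly reversed) per ground, Finnegan table above
finnegan = {
    "101": (12, 0, 2),
    "102": (4, 0, 12),
    "103": (28, 3, 28),
    "112": (1, 0, 9),
}

def reverse_rate(affirmed, in_part, reversed_):
    """Wholly-reversed share of all decided rejections, as a rounded percent."""
    total = affirmed + in_part + reversed_
    return round(100 * reversed_ / total)

rates = {ground: reverse_rate(*counts) for ground, counts in finnegan.items()}
# 2/14 -> 14%, 12/16 -> 75%, 28/59 -> 47%, 9/10 -> 90%
```

Note that affirmed-in-part decisions count against the "wholly reversed" rate here; counting them as partial wins would raise the figures, as the Knobbe § 103 discussion below illustrates.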

For the 59 rejections of § 103 obviousness, 28 were wholly reversed. This is a complete reversal rate of 47%, much higher than the average rate. 

Of 16 decisions deciding § 102 anticipation, 12 were wholly reversed. This is a reversal rate of 75%, again much higher than the average rate.

The § 112 rejections were decisively overturned: of 10 rejections, 9 were overturned. This translates into a 90% reversal rate, which is very high.

Section 101 was the one ground of rejection that underperformed the rest of the grounds of rejection. Of the 14 decisions deciding § 101 rejections, Finnegan had two reversed. This is a reversal rate of 14%, slightly lower than the average.


These data show that for the most part, Finnegan knows how to pick good candidates for appeal and/or how to advocate for a favorable outcome.

Next, Fish. Of the firms analyzed today, Fish has the greatest tolerance for pursuing appeals. Because of this, its reversal rates for some rejections are lower. But it also achieves a greater overall number of reversals than the other firms.

Ground of rejection | Affirmed | Affirmed in part | Reversed | Reverse rate
§ 101 | 13 | 0 | 2 | 13%
§ 102 | 6 | 2 | 16 | 66%
§ 103 | 56 | 14 | 50 | 43%
§ 112 | 10 | 0 | 8 | 44%
OTDP | 13 | 0 | 1 | 7%

These data for Fish show slightly lower reversal rates than Finnegan. The reversal rate for § 102 is 66% which is still much higher than average.


Next, the reversal rate for § 103 is also higher than average at 43% wholly reversed.


Next, § 112 falls right under average with a 44% reverse rate.

Next, § 101 has two reversals. Given that there were 15 total decisions with § 101 rejections, this is a 13% reversal rate, which falls slightly below the average reversal rate for this rejection.

Fish’s appeal strategy more closely resembles the Wayne Gretzky quote: “You miss 100% of the shots you do not take.” As with shots on goal, the more a firm appeals, the lower its success rate may be. As we have previously reported, Fish has a much greater number of total appeals, implying it pursues appeals more frequently than the others. This may mean the firm pursues appeals it is less confident of winning. But because of the volume of appeals, more applications end up reversed, capturing the benefits of pursuing an appeal.

Fish should be recognized for taking more cases to appeal and still succeeding at or close to average reversal rates. If Fish wanted to identify areas for improvement, perhaps § 101 and § 112 would be on the list. While it is certainly not bad to overturn the Examiner’s § 112 rejection 44% of the time, that is just under the average for all appellants. The same is true for § 101, notwithstanding the challenges of appealing abstract idea rejections.

The third firm is Knobbe. Knobbe files the fewest appeals among the three firms. The data show very high reverse rates, but because Knobbe pursues fewer appeals than the other firms discussed here, the result is fewer total reversals.

Ground of rejection | Affirmed | Affirmed in part | Reversed | Reverse rate
§ 101 | 3 | 0 | 2 | 40%
§ 102 | 2 | 0 | 5 | 71%
§ 103 | 17 | 6 | 29 | 56%
§ 112 | 2 | 0 | 5 | 71%
OTDP | 5 | 0 | 1 | 17%

For § 101 rejections, Knobbe has two reversals out of 5 decisions, a relatively high reversal rate of 40%. It achieves the same number of § 101 reversals as Fish and Finnegan with far fewer appeals.

For § 102, 71% of rejections are wholly overturned. This is higher than the average, and higher than Fish’s rate, though not as high as Finnegan’s.

For § 103, 56% of rejections are wholly overturned, much higher than the other two firms. Factoring in the six affirmed-in-part § 103 rejections, the share of decisions where at least one claim was reversed is 67%.

For § 112, 71% of rejections are overturned. This is much higher than average.

The data on Knobbe suggest something about its appeal strategy: Knobbe tends to appeal cases it is more confident of winning, and it shows in the high reversal rates.

One note that applies to all three firms concerns the reversal rates for obviousness-type double patenting (OTDP). These rates are all low, but many of the rejections were not argued on appeal. Such summary affirmances suggest that the applicant may instead opt to file a terminal disclaimer to overcome the rejection.

What it all means

These and other law firm-specific outcome data are important in two ways. First, detailed outcomes for specific grounds of rejection can point out areas for improvement. If a firm is below the average reversal rate for a ground of rejection (especially below the average for a particular examiner or art unit), there may be learning opportunities to increase the reversal rate. For example, an area for improvement for Finnegan could be § 101. That being said, achieving two reversals in 14 attempts is not bad.

Anticipat Practitioner Analytics provides an easy search interface that can retrieve all reversed applications for a particular ground of rejection, for an examiner, his art unit, or his technology center. Further, it allows any customer number to be queried if you want to competitively analyze certain filers.


The law firm or customer number analytics has its limitations. For example, just because a law firm has a low reversal rate for § 101 does not mean it provides lower-quality counsel than others; it may, for example, be taking on more difficult cases. But having the internal context of these cases can powerfully guide prosecution strategy.

Second, these data help show the optimal rate at which an applicant should appeal. Remember that how often a firm pursues appeals affects its reversal rate. A firm such as Knobbe that is more selective in pursuing appeals than a firm like Fish (all advocacy being equal) will have a higher reversal rate but fewer reversed decisions.

For example, let’s say there are 100 candidate applications that could go up on appeal. For simplicity’s sake, assume each has a single pending ground of rejection. A Knobbe-type firm pursues an appeal only for its three most egregious rejections and responds with claim amendments for the rest. In this situation, there is a high chance these three cases get overturned (because they involve the most improper rejections of the bunch). Say two get wholly reversed, resulting in a 67% reversal rate.

But let’s say a Fish-type firm with the same candidate applications is less choosy and decides to appeal the 10 most egregious rejections. Some of these rejections will not be overturned because they are less clearly improper. So this firm could have a lower reversal rate of 50%. But because it appealed more applications, even with the lower rate, the firm still gets three more notices of allowance, with all the benefits of an appeal.
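The selectivity trade-off in this hypothetical can be sketched in a few lines (the numbers are the illustrative ones from the example above, not real firm data):

```python
# Hypothetical selectivity trade-off: a selective firm appeals 3 of the 100
# candidates and wins 2; a high-volume firm appeals 10 and wins 5.
def outcome(appeals, wins):
    """Return (total reversals, reversal rate as a rounded percent)."""
    return wins, round(100 * wins / appeals)

selective_wins, selective_rate = outcome(appeals=3, wins=2)   # 2 reversals, 67%
volume_wins, volume_rate = outcome(appeals=10, wins=5)        # 5 reversals, 50%

# The high-volume firm shows the lower rate but wins more cases outright.
extra_reversals = volume_wins - selective_wins  # 3 more notices of allowance
```

The point is that a reversal rate alone cannot rank firms: the denominator (how selectively a firm appeals) moves the rate independently of advocacy quality.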

In conclusion, feel free to look up and monitor appeal decisions using customer numbers on Practitioner Analytics. Armed with the right material, you can see where you or others are strong and where you have opportunities. The data can also be used to showcase expertise and advocacy in certain areas, e.g., for marketing.

The appeal outcome is one of the most telling metrics for patent prosecution analytics. Here’s why

Big data is slated to revolutionize all aspects of society, and patent prosecution is no exception. But because of the complexity of patent prosecution, insights from big data must be carefully considered. Here we look at why appeal outcomes are one of the most telling metrics: they yield good insights with few misleading alternative explanations.

Because much of patent data is publicly available, some companies are amassing troves of patent data. And some metrics that suggest insight are relatively easy to calculate.

Take the allowance rate: compare an examiner’s total granted patents to his total applications, and voila. In some circles a high allowance rate is a good thing for both examiner and applicant. Under this theory, an examiner with a high allowance rate is reasonable and applicant-friendly; by the same token, a law firm with a high allowance rate is good, and a low allowance rate marks a bad examiner or law firm.

Another metric is the speed of prosecution: take the average time it takes to grant an application, whether measured in months, office actions, or RCEs. Under this theory, an examiner or law firm with a faster time to allowance is better.

While these theories may hold in certain situations, there are confounding explanations that point to the opposite conclusion. In sum, these metrics yield woefully incomplete insights, for three reasons.

First, these metrics incorrectly assume that all patent applications are of the same quality. In reality, examiners are assigned patent applications in a non-random manner: the same applicant (because of the similarity of subject matter) will have a large proportion of applications assigned to a single examiner or art unit. This means that related applications from the same applicant (drafted with the help of counsel) can be of very high or very low quality. Related low-quality applications can suffer from the same problems (little inventiveness, lack of novelty, poor drafting) that are not dependent on the examiner. So an examiner, through no fault of his own, can be assigned a high percentage of “bad” applications. In these cases, a low allowance rate should reflect on the applications, not the examiner.

Correspondingly, clients instruct some law firms to prosecute patent applications in a non-random fashion. Especially for cost-sensitive clients, some firms are assigned very specialized subject matter to maximize efficiency. But not all subject matter is equally patentable. So even when a law firm has a low allowance rate, that often does not mean the firm is doing a bad job. On the contrary, the firm could be doing a better-than-average job for the subject matter, especially in light of budget constraints imposed by the client.

This non-random distribution of patentable subject matter undermines use of a bare allowance rate or time to allowance.

Second, a favorable allowance rate or allowance-time metric can actually signal a poor-quality patent. One job of patent counsel is to advocate for claim breadth, but examiners will propose narrowing amendments (even when not required by the patent laws) because it makes the examiner’s job easier and avoids subsequent quality issues. So a quick and easy notice of allowance often merely signifies a narrow and less valuable patent. Office actions often include improper rejections, so a metric that rewards a quick compromise can show that the law firm is not advocating strongly enough. Thus, using these metrics to evaluate law firms could be telling you the opposite of what you think, depending on your goals.

Plus, getting a patent does not always best serve the client’s needs. Some clients do not want to pay an issue fee and subsequent maintenance fees on a patent that will never be used because it is so narrow and limiting. So it is a mark of good counsel when such clients are not handed a patent just because a grant is the destination; ideally, the client is brought into the business decision of how the patent will generate value. Counsel who do this, and whose allowance rates drop because of it, are unfairly portrayed as bad.

Third, these metrics lack substantive analysis. Correlating what happens with patent applications only goes so far without knowing the points at issue that led to the particular outcomes. Some applicants deliberately delay prosecution using various techniques, including filing RCEs; these can be legitimate strategies that benefit the client.

All this is not to say that such metrics are not useful. They are. These metrics simply need a lot of context for the true insight to come through.

In contrast to the above patent prosecution metrics, Anticipat’s appeal outcome directly evaluates the relevant parties in substantive ways. The appeal outcome metric reveals what happens when an applicant believes his position is right and the examiner and his supervisor believe theirs is. In such cases, who is right? The parties are forced to resolve the dispute before an independent panel of judges. And because reaching such a final decision requires significant time and resources (e.g., filing briefs and one or more appeal conferences that can kick cases out before they reach the Board), the stakes are relatively high. This weeds out alternative explanations for an applicant choosing to pursue an appeal: final decisions stem from a desire and position to win a point of contention that the Examiner thinks he is right on, not a whimsical exercise. Tracking this provides insight into the reasonableness of examiner rejections and into how counsel advises and advocates.

The appeal metric helps evaluate the examiner because being overturned on certain grounds of rejection (say, more often than average) says something about the examiner’s and his supervisor’s ability to apply and assess rejections. After working with an examiner, there can come a point when the examiner will not budge; the PTAB then becomes the judge of whether the Examiner is right or wrong. The same analysis works for evaluating whole art units or technology centers.

With Practitioner Analytics, you can see the reversal rates of specific examiners, art units, and tech centers, and you can also look at the specific arguments the Board has found persuasive. This means that if you are dealing with a specific point of contention, say a “motivation to combine” issue in obviousness, you can pull up all the cases where the PTAB overturned your examiner on that point. The overall reversal rate and the specific rationales can both help establish that the Examiner is not in line with the patent laws and rules.


The appeal metric also helps evaluate counsel because it shows the results of when counsel feels that their position is right even when the examiner will not budge. If counsel wins on appeal, this metric confirms the counsel’s judgment and their ability to make persuasive arguments in briefs.

Even a ninja can generate allowance rate metrics. But the savvy patent practitioner looks for more context to guide prosecution strategy. Insights from the data that are carefully analyzed avoid counter-intuitive explanations.

Which law firms are successful in overturning abstract idea rejections on appeal?

For difficult grounds of rejection, the right advocacy can make all the difference. The right counsel can know when to appeal and how to win on appeal. Here, we explore the demographic of firms that represent appellants that overturn one of the most difficult of all rejections: Section 101 abstract idea. Recent data show that while some big/specialized firms are successful, others without the same name recognition also are doing relatively well.

We have previously reported that in the post-Alice era, the PTAB reverses abstract idea rejections about 17% of the time. Updated for the past few months (blog post forthcoming), this overall rate has dipped. But this low percentage still represents a sizeable 135 decisions over the past year and a half (specifically, July 25, 2016 through December 1, 2017). This span of time covers the applications most likely to have been issued a post-Alice rejection and subsequently appealed. It turns out that select firms account for a good share of these successes, followed by a long tail of single reversals per firm.

Two firms immediately stand out from the pack: Morgan Lewis and Schwegman, Lundberg & Woessner. These firms each have 5 reversed decisions on appeal, a laudable number in this post-Alice climate.

Next are firms each having three reversals: Cuenot, Forsythe & Kim and Brinks Gilson and Lione (stemming from three related applications issued the same day).

Next come the firms/companies with two reversals apiece (arranged alphabetically):

Finally, there are 90 other firms/corporations/pro se applicants who have achieved one reversal (see end of post for names).

Our methodology used the Anticipat Research database to find all reversed Section 101 decisions decided on abstract idea grounds. We then identified the law firm or corporation that signed the reply brief or appeal brief and credited it with the reversal.

Two things are clear from these data. First, some firms are quite good (relatively speaking) at appealing abstract idea rejections, even in the face of fast-evolving case law and varying application filing dates (pre- and post-Alice). Make no mistake: a firm that can overturn an abstract idea rejection several times over the past year and a half is not doing so out of luck.

Additional research is needed into normalizing these data, which could identify whether these successful firms are achieving such high reversals by appealing much more frequently than others. However, Morgan Lewis and Schwegman would need to appeal quite a bit more than others for their high reversals to be explained only from sheer numbers of ex parte appeals.

A related question concerns firms with smaller patent prosecution practices. Do these firms’ reversal numbers come despite handling far fewer cases than the bigger patent prosecution firms? If so, smaller firms would be equally if not more successful than the Morgan Lewis and Schwegman types.

The second point that is clear is that a broad diversity of counsel succeeds in appealing abstract idea rejections. Counsel ranges from firms in big cities with the highest of billable rates to boutiques in smaller cities who carry much lower overhead. Solo practitioners also win and even pro se appellants succeed on Section 101 appeals. Common wisdom may suggest that you get what you pay for in patent counsel, and that a high stakes appeal of such a difficult rejection requires a Tom Brady billable rate. But the data show a broad range of counsel are successful, indicating that a superstar billable rate is not required to overturn an abstract idea rejection on appeal.

These two points lead to an interesting conclusion. While there are some certainties in navigating the abstract idea waters, there is also a great deal of uncertainty. As for the certainties, a certain level of legal sophistication and experience (persuasive writing, rebutting the Examiner’s points with the right, relevant case law) may be needed to successfully argue an abstract idea rejection at the PTAB.

As for the uncertainties, Section 101 case law has evolved steadily since Alice, which brings a large amount of unpredictability and volatility. That said, having the most up-to-date data on Section 101 ex parte appeals can equip any counsel with the right tools. Anticipat Practitioner Analytics lets you see reversal rates for specific examiners, art units, and tech centers for all types of rejections, and it shows the case law most often cited by the PTAB for abstract idea grounds of rejection. Watch this YouTube video for more information or check out this page: https://anticipat.com/accounts/signup/analytics/. Sign up now for a free trial.


As discussed above, here are the firms with one reversal, sorted alphabetically: