New timeline for sending Anticipat Recap emails: 14 days

Anticipat Recap is slightly changing its delivery timeframe. To balance freshness against completeness, Anticipat has in the past delivered a recap of a particular date's decisions eight days after that date. However, at certain times of the year the USPTO delays posting decisions, and some sneak in after the eight-day window. Because the eight-day approach does not reliably capture all the decisions, we are changing the timeframe to 14 days.

Anticipat currently gets the bulk of its appeals decisions from a USPTO-powered e-FOIA webpage. On an almost daily basis, this webpage posts ex parte PTAB decisions after some document processing (including OCR'ing the PDFs and assigning limited metadata to the decisions). For the most part, it takes the USPTO several days to post the decisions after this processing.

Recently, we concluded that this eight-day window sometimes doesn't cut it. Some of the slowest times of the year for the USPTO to post decisions to the e-FOIA page are at the ends of fiscal quarters. The end of December is always especially slow. The last days of this past March were also much slower than usual: only a fraction of decisions were available at the eight-day mark compared with the total number subsequently posted.

For those users who would prefer a shorter timeframe, feel free to take advantage of the Research page. The Research page is updated every day at 3 PM EST and allows users to check for specific issues or the most recent decisions at any time. In fact, within 24 hours the Research page has the exact same decisions that the USPTO's webpage has, and this even includes our unique ground-of-rejection and outcome annotations.

As fresh as the Research page is, it doesn’t neatly organize the various issues and outcomes like the Recap email does. See below.

[Image: Recap email]

Even though we have opted, for the time being, to trade slightly less freshness for completeness in these daily emails, we are working on a way to have both. We are currently developing ingestion infrastructure that does not depend on the e-FOIA webpage and is much speedier. We are also developing settings for users who want weekly recap emails instead of daily ones.

Stay tuned for future developments. And if you have any feedback, please reach out to us.

How the biggest patent firms (Finnegan, Fish, Knobbe) do on appeal

We recently reported that the top patent firms (by registered practitioners, as featured in a Patentlyo post) pursue ex parte appeals very differently, despite apparently equal knowledge of the benefits of pursuing an appeal to further prosecution. While this finding is interesting, pursuing an appeal and winning on appeal are two different things. Here we report on the differences in appeal outcomes across three firms: Finnegan, Fish & Richardson, and Knobbe Martens.

As brief background, we have found that average reversal rates for the various grounds of rejection are quite stable. In a recent post, we reported that across the entire USPTO, § 101 has about a 20% reversal rate on appeal, §§ 102 and 112 hover at about 50%, and § 103 is around 33%. To look at these firms' outcomes, we used Anticipat's Research database and Practitioner Analytics for decisions from July 25, 2016 through March 12, 2018.

The firm-specific appeals outcome data

The above three firms exceed the USPTO reversal rates in almost all aspects. Finnegan first.

Ground of rejection   Affirmed   Affirmed-in-part   Reversed   Reversal rate
§ 101                 12         0                  2          14%
§ 102                 4          0                  12         75%
§ 103                 28         3                  28         47%
§ 112                 1          0                  9          90%
OTDP                  5          0                  1          17%
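The reversal rates in these tables follow directly from the counts. As a minimal sketch of that arithmetic (counts transcribed from the Finnegan table above; this is illustrative, not Anticipat's own code):

```python
# Reversal-rate arithmetic behind the tables in this post.
# Counts: (affirmed, affirmed-in-part, wholly reversed) per ground of rejection.
finnegan = {
    "101": (12, 0, 2),
    "102": (4, 0, 12),
    "103": (28, 3, 28),
    "112": (1, 0, 9),
    "OTDP": (5, 0, 1),
}

def reversal_rate(affirmed, affirmed_in_part, reversed_):
    """Wholly reversed decisions as a share of all decided rejections."""
    total = affirmed + affirmed_in_part + reversed_
    return reversed_ / total

for ground, counts in finnegan.items():
    print(f"§ {ground}: {reversal_rate(*counts):.0%}")
```

Running this reproduces the table's rates (14%, 75%, 47%, 90%, 17%).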


For the 59 rejections of § 103 obviousness, 28 were wholly reversed. This is a complete reversal rate of 47%, much higher than the average rate. 

Of 16 decisions deciding § 102 anticipation, 12 were wholly reversed. This is a reversal rate of 75%, again much higher than the average rate.

The § 112 rejections were decisively overturned. Of 10 rejections, 9 were overturned. This translates into a 90% reversal rate, which is very high.

Section 101 was the one ground of rejection that underperformed the rest of the grounds of rejection. Of the 14 decisions deciding § 101 rejections, Finnegan had two reversed. This is a reversal rate of 14%, slightly lower than the average.

[Image: Finnegan § 101 outcomes]

These data show that for the most part, Finnegan knows how to pick good candidates for appeal and/or how to advocate for a favorable outcome.

Next Fish. Of the firms analyzed today, Fish has the greatest tolerance for pursuing appeals. Because of this, the reversal rates for some rejections are lower. But this also means an overall greater number of reversals than the other firms.

Ground of rejection   Affirmed   Affirmed-in-part   Reversed   Reversal rate
§ 101                 13         0                  2          13%
§ 102                 6          2                  16         66%
§ 103                 56         14                 50         43%
§ 112                 10         0                  8          44%
OTDP                  13         0                  1          7%

These data for Fish show slightly lower reversal rates than Finnegan's. The reversal rate for § 102 is 66%, which is still much higher than average.

[Image: Fish § 102 outcomes]

Next, the reversal rate for § 103 is also higher than average, at 43% wholly reversed.

[Image: Fish § 103 outcomes]

Next, § 112 falls right under average with a 44% reversal rate.

Next, § 101 has two reversals. Given that there were 15 total decisions with § 101 rejections, this is a 13% reversal rate, which falls slightly below the average reversal rate for this rejection.

Fish's appeal strategy more closely resembles the Wayne Gretzky quote: "You miss 100% of the shots you don't take." As with shots on goal, the more a firm appeals, the lower its success rate may be. As we have previously reported, Fish has a much greater number of total appeals, implying it pursues appeals more frequently than the others. This may mean the firm pursues appeals it is less confident of winning. But, as with shooting, the volume of appeals means that more applications get reversed and thus take advantage of the benefits of pursuing an appeal.

Fish should be recognized for taking more cases to appeal and still succeeding at or close to average reversal rates. If Fish wanted to identify areas for improvement, perhaps § 101 and § 112 would be on the list. While overturning the Examiner's § 112 rejection 44% of the time is certainly not bad, that is just below the rate for all appellants. The same is true for § 101, notwithstanding the challenges of appealing abstract idea rejections.

The third firm is Knobbe. Knobbe files the fewest appeals of the three firms. The data show very high reversal rates, but because it pursues fewer appeals than the other firms discussed here, the result is fewer total reversals.

Ground of rejection   Affirmed   Affirmed-in-part   Reversed   Reversal rate
§ 101                 3          0                  2          40%
§ 102                 2          0                  5          71%
§ 103                 17         6                  29         56%
§ 112                 2          0                  5          71%
OTDP                  5          0                  1          17%

For § 101 rejections, Knobbe has two reversals out of five, a relatively high reversal rate of 40%. It achieves the same number of reversals as Fish and Finnegan with far fewer appeals.

For § 102, 71% of rejections are wholly overturned. This is higher than both the average and Fish's 66%, though not as high as Finnegan's 75%.

For § 103, 56% of rejections are wholly overturned. This is much higher than the other two firms. Factoring in the six affirmed-in-part rejections on § 103, the number of decisions where at least one claim was reversed is at 67%.

For § 112, 71% of rejections are overturned. This is much higher than average.

The data seem to reveal something of Knobbe's appeal strategy: Knobbe tends to appeal cases it is more confident of winning, and it shows in the high reversal rates.

One note about all three firms concerns the reversal rates for obviousness-type double patenting. All of these rates are low, but many of the rejections were not argued on appeal. Such summary affirmances suggest the applicant plans to overcome the rejection by filing a Terminal Disclaimer instead.

What it all means

These and other law-firm-specific outcome data are important in two ways. First, having detailed outcomes for specific grounds of rejection can point out areas for improvement. If a firm is below the average reversal rate for a ground of rejection (especially below the average for a particular examiner or art unit), there might be learning opportunities to increase the reversal rate. For example, an area for improvement for Finnegan could be § 101. That being said, achieving two reversals in 14 attempts is not bad.

Anticipat Practitioner Analytics provides an easy search interface that can retrieve all reversed applications for a particular ground of rejection, for an Examiner, an art unit, or a technology center. Further, it allows any customer number to be queried if you want to competitively analyze certain filers.

[Image: Practitioner Analytics header]

The law firm or customer number analytics has its limitations. For example, just because a law firm has a low reversal rate for § 101 does not mean it provides lower-quality counsel than others. It may, for example, be taking on more difficult cases. But having the internal context of these cases can powerfully guide prosecution strategy.

Second, these data help show the optimal rate at which an applicant should appeal. Remember that how often a firm pursues appeals affects its reversal rates. A firm such as Knobbe that is more selective in pursuing appeals than a firm like Fish, all advocacy being equal, will have a higher reversal rate but fewer reversed decisions.

For example, let's say there are 100 candidate applications that could go up to appeal. For simplicity's sake, assume each has a single pending ground of rejection. A Knobbe-type firm pursues an appeal for only its three most egregious cases and responds with claim amendments for the rest. In this situation, there is a high chance these three cases get overturned (because they are the most improper of the bunch). Let's say that two get wholly reversed, resulting in a 66% reversal rate.

But let’s say a Fish type of firm with the same candidate applications is less choosy and decides to appeal the 10 most egregious rejections. Some of these rejections will not be overturned because they are less clearly improper rejections. So this firm could have a lower reversal rate of 50%. But because the firm decided to appeal more applications, even with a lower reversal rate, that firm is still getting three more notices of allowance with all the benefits of an appeal.
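The tradeoff in this hypothetical is simple arithmetic, sketched below (the appeal counts and rates are the made-up numbers from the example above, not real firm data):

```python
# Hypothetical selectivity tradeoff: same 100 candidate applications,
# two appeal strategies with different selectivity.
def expected_reversals(appeals_filed, reversal_rate):
    """Expected number of wholly reversed cases for a given strategy."""
    return appeals_filed * reversal_rate

selective = expected_reversals(3, 2 / 3)    # Knobbe-type: 3 appeals at ~66%
high_volume = expected_reversals(10, 0.5)   # Fish-type: 10 appeals at 50%

print(round(selective))                 # 2 reversals
print(round(high_volume))               # 5 reversals
print(round(high_volume - selective))   # 3 extra allowances for the less choosy firm
```

The less selective firm has the lower rate but the higher count, which is the point of the example.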

In conclusion, feel free to look up and monitor appeal decisions using customer numbers on Practitioner Analytics. Armed with the right material, you can see where you or others are strong and where you have opportunities. This can also be used to showcase expertise and advocacy in certain areas, e.g., for marketing.

Anticipat Education Part 5: How to find legal authority relevant for overturning specific rejections

Finding the most relevant and current legal authority to advance a particular argument is time-consuming. But it is also a very important part of persuasive advocacy in patent prosecution. With Anticipat, we channel PTAB decisions to do much of this work. Anticipat Analytics shows you the relevant legal authority (e.g., case law, Guidelines, and MPEP support) that the Board relies on in overturning specific rejections.

The legal authority that the Board relies on to overturn a particular rejection in a given tech center can be relevant when responding to a pending Office Action. If it worked on appeal in a related case, that authority might be worth a look in your case. And it isn't only relevant for a case that you are appealing; it can be persuasive when working with your Examiner.

To understand how to use legal authority on Anticipat Analytics, a little bit of background first. Legal authority goes hand in hand with the corresponding argument. We previously explained how Anticipat Analytics breaks down the most common arguments used by the Board in overturning a particular rejection. The legal authority goes a step further by showing the legal authority relied on in making this particular reversal.

Let's take as an example an application that has an abstract idea rejection under Section 101 – Patent-ineligible subject matter. When you look up this application in the input field, the abstract idea tags listed are "Step 1," followed by "Step 2," then "prima facie case."

The most common tag for abstract idea is “Abstract Idea – Step 1.” This means that when the Board is reversing abstract idea rejections, most often it does so in step 1 of the Mayo/Alice framework. Within step 1, which legal authority does the Board cite to support its reversal? This is where the legal icon to the immediate right of the tag comes into play.

The Examiner column shown above has the tags and authority most relevant to an application (they represent Board decisions that have overturned this specific Examiner), but it also has the smallest sample size. The next column, art unit, has more decisions: it provides all the decisions (and legal authority) for this art unit.

The next column, tech center, has even more decisions and legal authority.

Finally, the right-most column displays results for this specific tag across the entire USPTO, providing all the Board decisions and legal authority.

The legal authority icon immediately to the right of the abstract idea text is clickable; clicking it opens a popup with the full list of legal authority cited for step 1 in this tech center.

Having the relevant legal authority for specific arguments used by judges can help guide strategy in what is persuasive.

Please reach out to us with any questions or comments; we're standing by.

Anticipat Team

Anticipat Education Part 2: How to use Anticipat Research database

As a patent practitioner, you see different grounds of rejection all the time. Let’s imagine that one Office Action from a particular Examiner has a rejection that you think is bogus. It’s very likely that this Examiner has made similar types of rejections in other cases. Imagine if you could easily see the instances where the authoritative body for this particular Examiner (the PTAB) told the Examiner that this exact ground of rejection was indeed wrong. This information could be very valuable because the Board’s reasoning can be insightful to other cases that have similar rejections.

With Anticipat Research, you can look at an Examiner's appeal history in this very way. Our unique methodology of annotating each appeal decision by ground of rejection and outcome allows for easy identification of specific grounds of rejection.

Let's say you want to see how often your Examiner was reversed on "101 – Patent-ineligible subject matter." Simply select this ground of rejection filter in the issue section, then input your Examiner's last name. Here, we insert the last name "Hoffman," select "Reversed" in the outcome section, and hit Apply Filters. See the image below.

As can be seen, the “Recap” column gives you a high level overview of what the Board decided in each of the displayed decisions.

To broaden the search from the above example, consider selecting the outcome "Affirmed-in-part" in addition to "Reversed." With both selected, you will see decisions where the Board reversed all claims as well as decisions where the Board reversed some claims but affirmed others.

You can use Anticipat Research for more general information. Say you want to look at all the recent Section 101 – Patent-ineligible subject matter decisions that are reversed. Simply input a date range, select “Section 101 – Patent-ineligible subject matter” and hit Apply Filters. You can then add columns to your results. Say you want to see the art units where these reversed grounds are coming from. Simply select art unit. And the column appears.
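The filter workflow above amounts to applying simple predicates over annotated decision records. A sketch of that logic follows; the field names and sample records here are hypothetical illustrations, not Anticipat's actual schema:

```python
# Sketch of the Research-page filter logic over annotated decisions.
# Records and field names are hypothetical, for illustration only.
decisions = [
    {"examiner": "Hoffman", "issue": "101", "outcome": "Reversed", "art_unit": "3689"},
    {"examiner": "Hoffman", "issue": "103", "outcome": "Affirmed", "art_unit": "3689"},
    {"examiner": "Smith",   "issue": "101", "outcome": "Reversed", "art_unit": "2193"},
]

def apply_filters(rows, issue=None, examiner=None, outcomes=None):
    """Keep rows matching every supplied filter, like the Apply Filters button."""
    return [
        r for r in rows
        if (issue is None or r["issue"] == issue)
        and (examiner is None or r["examiner"] == examiner)
        and (outcomes is None or r["outcome"] in outcomes)
    ]

hits = apply_filters(decisions, issue="101", examiner="Hoffman",
                     outcomes={"Reversed", "Affirmed-in-part"})
print(len(hits))  # 1
```

Adding a column (such as art unit) to the results then just means displaying another field of each matching record.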

With Anticipat Research, you can discover trends and patterns that can concretely help you in your patent prosecution practice.

Feel free to give Research a 14-day free trial.

Let us know if you have any questions or if you would like a demo.

Anticipat Team

Guide your patent prosecution strategy with Anticipat

We at Anticipat are excited to announce a new product called Practitioner Analytics. The tool helps practitioners apply what succeeds on appeal at the Board to all aspects of routine patent prosecution. But before we explain the tool, we touch on some present realities of a patent practitioner responding to an Office Action.

Status Quo
As a patent professional, you may spend a lot of time reviewing Office Actions and determining response strategies. You may wade through each Office Action rejection-by-rejection. The complexities of patent law make this process difficult and time-consuming.

Gut feeling is a powerful way for a practitioner to approach each rejection. For one rejection, based on your experience and knowledge of the patent laws, your gut may tell you that the Examiner brings up a good point, so you consider amending the claims. For another, that same experience tells you a rejection is unreasonable, so you consider traversing it without amending the claims. For still other rejections, you may initially not know how to proceed, due to a lack of experience or of up-to-date knowledge of the rejection.

So a practitioner's gut feeling can guide an Office Action response strategy only so far, especially with constant developments in the law. Besides being inefficient, this approach risks gaps in the practitioner's own experience, and it makes the strength of your strategy difficult to gauge.

Furthermore, the client's preferences can make the strategy even more complex, necessitating dives into seldom-explored areas of patent law. For example, the client may insist on maintaining a certain claim breadth to guard against entrants into the market or to cover a competitor's product, which makes the prosecution strategy more difficult. Hence you may have to rely on a less persuasive strategy to overcome a particular rejection.

With all the complexities that go into patent law, do you ever feel like there must be a better way to keep current on response strategies in a more efficient, fact-based way?

PTAB Data
Luckily, there is a large body of appeals decisions at the PTAB where judges routinely overturn Examiner rejections. The judges apply the rules and laws using the same arguments and legal support that Applicants can use to overcome rejections in responding to Office Actions. If an argument works before the Board, that argument has high odds of ultimately winning out. So in a way the Board weeds through much of the possible argumentation and distills the arguments effective in overcoming all kinds of rejections. And because of the sheer volume of appeals decisions, these decisions include rationales for overcoming practically every ground of rejection. Plus, because the decisions are authored by independent judges at the PTO, they are an accurate reflection of the standards and arguments used to scrutinize both Examiner and Appellant arguments.

The only problem is that these decisions are posted in bulk form with minimal search capabilities, the content of each decision is disorganized, and manually wading through the decisions is horrific information overload.

Also, the USPTO overly simplifies decision outcomes, which does not tell you very much about what happened in any given appeal decision. So how do you make use of the data in the thousands of appeals decisions that issue every year?

Solution: Anticipat Practitioner Analytics

Anticipat Practitioner Analytics provides more than statistics. It is a PTAB legal research tool that can quickly get you helpful fact-based information about arguments and strategy you can use for a specific application. How does it do this?

Practitioner Analytics powerfully and efficiently guides prosecution strategy. By inputting an application number into the Analytics search engine, the page returns lists of decisions where the Board reversed for various possible rejections.

This can help practitioners in three important areas.

Area 1: Organize persuasive arguments
Practitioner Analytics organizes the rationales that the Board uses in reversing an Examiner's ground of rejection. It does so by aggregating reversal rationales at each organizational level in the Office (Examiner, art unit, tech center). The specific legal rationales argued before the Board at each of these levels are listed underneath a bar chart showing actual reversal rates at each level. At the click of a mouse, the practitioner can select the legal issue in their specific case and see how it was treated in Board decisions coming from the Examiner involved, the Examiner's art unit, the tech center, and the entire USPTO. The practitioner can then compare the facts of their case against the decided appeals where this issue was involved to further predict the outcome before the PTAB.

Practitioner Analytics improves the caliber of argumentation and saves legal research time by organizing and ranking persuasive reversal rationales at the Examiner, art unit, tech center, and global USPTO levels for each ground of rejection.

Area 2: Assess strength of rejections
Appellants typically won’t spend the time and money on a full appeal if they’re not sure of their position. Similarly, weak Examiner positions tend to get weeded out by the preappeal conference and appeal conference. So the appeal decision is actually a good objective data point for what kinds of rejections the Examiner corps is not incentivized to back down from but still will lose at the Board. This information is invaluable when deciding whether to pursue an appeal or not.

Anticipat provides you with the percentage of reversed decisions at each level (Examiner, art unit, tech center, USPTO). The higher the reversal rate, the less reasonable the Examiner’s rejection.

This reversal rate information enhances a professional's anecdotal experience by identifying anomalies in how a particular ground of rejection's reversal rate at the Board compares across groups. This can guide a practitioner's strategy in responding to Office Action rejections: knowing how this particular Examiner's or art unit's reversal rate compares with other groups can suggest when to hold firm to a position. For practitioners with relatively little appeals experience in a particular technology, these data instantly tell you what is working and what is not, without years spent learning in the School of Hard Knocks.

Area 3: Get favorable case law straight from the Board
Practitioner Analytics also stores the legal support cited by the Board in each particular decision for each legal issue (tag) identified.

This means that, in the aggregate, Practitioner Analytics provides at a mouse click the case law, MPEP sections, and guidelines relied on to reverse or affirm the Examiner for each particular rationale. You can keep current on the case law the Board itself is now using and identify trends in the persuasive legal authority specific to the rejections in a given case.

Conclusion
With Practitioner Analytics, you can use successful approaches at the Board in your own practice without having to wait decades to gain experience:

  • Practitioner Analytics empowers you with knowledge about the strength of rejections at the examiner, art unit, and tech center levels.
  • Practitioner Analytics provides a simple and intuitive interface so that you can quickly identify successful reversal rationales at the examiner, art unit, and tech center levels.
  • Practitioner Analytics keeps you up to date on specifically tagged legal issues, referencing the case law the Board itself uses on each issue.

Anticipat Analytics enhances your ability to provide quality, cost-effective advocacy, saving you countless hours in legal research. Right now, try it with unlimited access for free for two weeks.

Introducing Rejection Tags: A Way to Use Rationales and Types of Rejections for Patent Prosecution

In 1753, Swedish botanist Carl Linnaeus introduced a system for classifying plants. His two-term classification system assigned each organism a generic first name and a more specific second name (e.g., Homo sapiens for humans). The system itself was not extraordinary compared with earlier classifications, but its elegance and simplicity truly were groundbreaking, paving the way for all living organisms to be systematically and uniformly classified.

As the father of modern taxonomy, Linnaeus would be stunned to see how far technology has taken classification. Entire industries have been revolutionized through improved classifying of big data. And patent prosecution is no exception. We at Anticipat are excited to introduce “tags” on the Research Database for all grounds of rejections as a way of classifying patent prosecution rationales.

If a ground of rejection is the Board-decision equivalent of a genus, then the tag is the species. The Anticipat Research Database has long included the more generic grounds of rejection for each appealed decision (e.g., § 101, § 102, § 103, § 112, OTDP). But because of the wide-ranging categories of common reasons why a rejection is reversed or affirmed, it helps to dig deeper than cataloging the ground of rejection. This deeper level represents the various possible points of contention. Identifying these specific categories lets you find the decisions relevant to a certain issue without drowning in the ocean of Board decisions.

In short, a tag is a brief summary of a more granular point of contention regarding the ground of rejection. Assume that an applicant and Examiner are at odds over a ground of rejection. Depending on the rejection, this disagreement can center on a finite number of points. Now, with tags, you can easily look up the various points of contention for each ground of rejection. In other words, if you believe a particular obviousness argument is worth pursuing, you can find decisions where the Board reversed an Examiner using that very argument. Or if you don't know which argument is worth pursuing, you can quickly find the arguments that have been most successful in reversing the Examiner. Here are examples of the ground of rejection/tag classification system.

  • 101 – nonstatutory subject matter

Some of the tags for the ground of rejection “§101 – nonstatutory subject matter” include:

  • Software/Data per se
  • Abstract Idea (prima facie case, step one, step two)
  • Law of Nature (prima facie case, step one, step two)
  • Naturally-Occurring Phenomenon (step one, step two)

Finding all the decisions with a particular ground of rejection is just the first step. Even more useful is weeding out less relevant decisions that fall within the same ground-of-rejection category. Take the abstract idea rejection. Many points of contention within § 101 nonstatutory rejections are less relevant to abstract ideas: computer-readable medium comprising a signal, software per se, combining multiple statutory classes, law of nature, naturally occurring phenomenon, claiming a human, etc. Even within abstract idea, there are multiple points of contention: 1) prima facie case (the Examiner did not do even the minimum job of identifying and/or rejecting the claim as an abstract idea), 2) step 1, and 3) step 2 of the Mayo/Alice framework. Since we at Anticipat track all of these subcategories for you, you can find decisions with your desired point of contention in seconds.
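The genus/species idea maps naturally onto a tag-based lookup. As an illustrative sketch (the tag names come from this post, but the decision records and data structure are hypothetical, not Anticipat's implementation):

```python
# Sketch of looking up decisions by ground-of-rejection genus and tag species.
# Tag names are from the post; the decision records are hypothetical.
decisions = [
    {"ground": "103", "tags": ["Hindsight Reasoning (Prima Facie Case)"], "outcome": "Reversed"},
    {"ground": "103", "tags": ["Teaching away"], "outcome": "Affirmed"},
    {"ground": "101", "tags": ["Abstract Idea - step one"], "outcome": "Reversed"},
]

def find(rows, ground, tag, outcome="Reversed"):
    """Decisions where the Board reached `outcome` on this ground via this tag."""
    return [r for r in rows
            if r["ground"] == ground and tag in r["tags"] and r["outcome"] == outcome]

matches = find(decisions, "103", "Hindsight Reasoning (Prima Facie Case)")
print(len(matches))  # 1
```

The two-level lookup (ground, then tag) is what lets you pull only the decisions arguing your exact point of contention.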

Obviousness

The ground of obviousness under 35 U.S.C. § 103(a) includes our most advanced set of tags. This makes sense, as obviousness is one of the most nuanced and developed grounds of rejection. We track over 20 points of contention within obviousness, including the following:

  • Scope and Content of Prior Art – Broadest Reasonable Interpretation
  • Examiner Bears Initial Burden (Prima Facie Case)
  • Clear and Factually-Supported Articulation of Reasons for Obviousness (Prima Facie Case)
  • Hindsight Reasoning (Prima Facie Case)
  • Secondary Considerations
  • Combining/Substituting prior art elements according to known methods to yield predictable results
  • Use of known technique to improve similar devices (methods, or products) in the same way
  • Applying a known technique to a known device (method, or product) ready for improvement to yield predictable results
  • “Obvious to try” – choosing from a finite number of identified, predictable solutions, with a reasonable expectation of success
  • Known work in one field of endeavor may prompt variations of it for use in either the same field or a different one
  • Proposed modification cannot render the prior art unsatisfactory for its intended purpose
  • Teaching away
  • Non-Analogous Art

The challenge in finding relevant arguments to overcome an obviousness rejection, for example, is that any case decided by the Board could involve one or more of these obviousness points. Given the volume of obviousness decisions, finding the relevant ones has been impractical.

The point of tags

You can do a lot of interesting things with tags. Suppose you are stuck on a particular rationale used by an Examiner that you think is unreasonable. You can easily match your issue with cases decided at the Board and use Anticipat to quickly pull the decisions where the Board agreed with you. If there are very few decisions on your side, that is a valuable reality check. If there are many, you can review them to double-check that your facts correspond with those in the decisions. You can also use the legal authority the Board relied on for this particular point of contention in persuading your Examiner.

Tags will be incorporated into our soon-to-be-released Practitioner Analytics page to guide prosecution strategy. Using the Practitioner Analytics interface, tags can be ranked in the order of frequency in overturning an Examiner’s particular rejection. You can thus find better arguments faster and with more confidence. Sign up for an invitation to the soon-to-be-released Practitioner Analytics page.

Board decisions show that independent judges have agreed with the applicant’s position in a related case.  They are a powerful way to check and augment your existing experience.

Conclusion

You don't have to be a famous botanist to appreciate that identifying Board decisions through a rejection/tag relationship is a simple but powerful way of describing how the Board decides cases today. In the aggregate, this structure provides targeted information to inform your patent prosecution strategy. At about $1 per day, the Anticipat Research Database is not only incredibly affordable; at current hourly billing rates, it pays for itself in just seconds a day. Try it now for free.