Anticipat blog recognized as top 100 IP blog

After a year and a half of posting, this blog is starting to get recognized. In addition to traffic growth, Anticipat blog has been selected as one of the Top 100 Intellectual Property Blogs on the web by Feedspot. See https://blog.feedspot.com/intellectual_property_blogs/

Feedspot’s list is a comprehensive ranking of Intellectual Property blogs on the internet. Anticipat comes in at #74, posting at a rate of about one post a week.

The list highlights that there are many good IP blogs out there. In fact, we include a short list of some of them in the right sidebar under BLOGS TO FOLLOW.

Stay tuned for many more interesting and relevant posts. We will continue providing practical content for the patent prosecutor.

Berkheimer’s biggest effect on PTAB outcomes

After the Federal Circuit’s Berkheimer and Aatrix decisions held that the abstract idea inquiry can require factual findings, this blog predicted that the number of abstract idea rejection reversals at the Board would dramatically increase. The logic was that many examiners had not performed the rigorous analysis seemingly required for Step 2 of the Alice/Mayo framework in light of these decisions. It has now been over four months since Berkheimer was decided, and we can safely say that this dramatic change has not yet happened.

Consider the decisions involving abstract idea rejections in the year leading up to Berkheimer (March 1, 2017 to March 1, 2018, allowing for some lag time after Berkheimer was decided). In that year, 107 reversals out of 901 total decisions yield a reversal rate of 11.9%. Post-Berkheimer (March 1, 2018 to present), there have been 54 reversals out of 525 total decisions, a reversal rate of 10.3%, slightly lower than the pre-Berkheimer rate. (All these decisions can be found using the Anticipat Research page.)
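The arithmetic is simple enough to check. Here is a minimal Python sketch reproducing the figures above; the counts are the ones quoted in this post, and the function name is ours, purely for illustration:

```python
# Minimal sketch reproducing the reversal-rate arithmetic quoted above.
# The counts come from the Anticipat Research page as cited in this post.

def reversal_rate(reversals: int, total: int) -> float:
    """Return reversals as a percentage of all decisions."""
    return 100.0 * reversals / total

pre_berkheimer = reversal_rate(107, 901)   # Mar 1, 2017 to Mar 1, 2018
post_berkheimer = reversal_rate(54, 525)   # Mar 1, 2018 to present

print(f"Pre-Berkheimer:  {pre_berkheimer:.1f}%")    # 11.9%
print(f"Post-Berkheimer: {post_berkheimer:.1f}%")   # 10.3%
```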

But something has changed: the way that the Board performs its analysis. Historically, the Board has favored relying on step 1 of the Alice/Mayo framework by nearly a 2:1 ratio. From March 1, 2017 to March 1, 2018, there were 63 decisions where the panel reversed because the claims were not directed to an abstract idea under step 1, versus 35 decisions where the panel reversed because the claims recited something more than the asserted abstract idea under step 2.

An important piece of the step 2 analysis is whether the claims recite an element or combination of elements that is more than well-understood, routine, or conventional. As the Federal Circuit has made clear, this inquiry is a question of fact. And the recent Berkheimer memo charges Examiners with supporting this step with factual evidence. So it would make sense that the Board, in light of Berkheimer, is reversing under this step more frequently than before.

And that is exactly what is happening. Recently, the two steps have been used almost equally in reversing abstract idea rejections. From March 30, 2018 to present, the Board has used step 1 to reverse 16 times and step 2 an almost equal 15 times. This means the Board has become much more receptive to arguments under step 2 in light of Berkheimer, even if the overall reversal rate has not noticeably increased.
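The same arithmetic shows the shift in the Board’s analytical mix. A small sketch, again using only the counts quoted in this post:

```python
# Sketch of the step 1 vs. step 2 mix among abstract idea reversals,
# using the counts quoted in this post.

def step1_share(step1_count: int, step2_count: int) -> float:
    """Return the percentage of reversals resting on step 1."""
    return 100.0 * step1_count / (step1_count + step2_count)

# Mar 1, 2017 to Mar 1, 2018: 63 step 1 reversals vs. 35 step 2 reversals
print(f"Pre-Berkheimer step 1 share:  {step1_share(63, 35):.0f}%")  # 64%
# Mar 30, 2018 to present: 16 step 1 reversals vs. 15 step 2 reversals
print(f"Post-Berkheimer step 1 share: {step1_share(16, 15):.0f}%")  # 52%
```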

The lower post-Berkheimer reversal rate is not easy to explain. It may stem from a confluence of factors. Perhaps applicants are getting more aggressive and appealing claims that are less patent-eligible. It could be that some applications that would otherwise be reversed if appealed all the way to a final decision are getting allowed at the appeal conference stage or earlier.

Regardless of the success rates of appealing abstract idea rejections, the change in the way the Board reverses these rejections highlights a good practice tip: focus appeal argumentation on step 2 where appropriate. For appeals that have long since been forwarded to the Board, it is not too late to bring this up to the panel by way of an oral hearing. During the hearing, explaining the step 2 analysis could be helpful to the panel, assuming that some sort of step 2 argument is already on the record. And of course, when interviewing and otherwise responding to Office Actions with abstract idea rejections, step 2 has become a powerful tool.

Understanding the Examiner Answer: analyze anything new and contest as needed

The Examiner Answer can be a very important stage of the ex parte appeal process. It is at this stage that Examiners may try to make up for weak Office Action positions and set themselves up for affirmance at the Board. Understanding the Examiner’s incentives and tactical options, however, can give the patent practitioner the upper hand.

The Examiner Answer is technically optional (“The primary examiner may, within such time as may be directed by the Director, furnish a written answer to the appeal brief.” 37 CFR 41.39). Examiners usually prepare one because of the disposal credits they receive. Beyond this most obvious incentive, Examiners also have an opportunity to present their case most favorably to the Board panel that will decide the case. Sometimes the analysis in the Answer improperly goes out of bounds. Since an appellant gets only 60 days to respond to an Examiner Answer (no extensions), a timely assessment of the Answer is critical.

[Image: Examiner Answer response timeline]

Timeline image attributed to Fitch, Even, Tabin & Flannery LLP

One way that Examiners cure deficiencies in their pending rejections is by shading their arguments ever closer to the definition of a new ground of rejection (new analysis or evidence, described below). Introducing a formally new rejection at the Examiner Answer stage requires a Technology Center Director’s signature, which makes formal new rejections unattractive to Examiners. We recently pointed out a growing awareness in the patent bar of improper Examiner Answers that include a new rejection without designating it as new.

There’s a good option for combating such an improper new ground of rejection that has not been so designated: file a petition to designate the rejection as a new ground and reopen prosecution. See MPEP 1207.03(b). This petition tolls the 60-day window to respond to the Examiner’s Answer, so pursuing a resolution will not affect the ability to proceed with the appeal.

Another technique was very recently blessed by the Federal Circuit in In re Durance, No. 2017-1486 (Fed. Cir. June 1, 2018): directly arguing against an improper new rejection in the Reply Brief, even if the argument had never been raised before. In Durance, the Examiner argued for the first time in the Examiner Answer that there was no structural difference between the claimed invention and the combined teachings of the prior art references. The appellant challenged this structural-identity rejection in its reply brief, but the Board declined to consider the challenge, citing waiver. The Federal Circuit held this was error.

The Reply Brief, like the Examiner Answer, is optional. There is no good data yet on whether filing a Reply Brief improves the chance of succeeding on appeal (although Anticipat is currently ingesting prosecution data to answer this question). But a Reply Brief can help the Board see defects in the Examiner’s case, including improper new rejections.

Whether an Examiner Answer includes an improper new rejection depends very much on the facts. The MPEP outlines what constitutes a new ground of rejection. See MPEP 1207.03(a). Fact patterns that constitute a new ground of rejection include:

  • Changing the statutory basis of rejection from 35 U.S.C. 102 to 35 U.S.C. 103.
  • Changing the statutory basis of rejection from 35 U.S.C. 103 to 35 U.S.C. 102, based on a different teaching.
  • Citing new calculations in support of overlapping ranges.
  • Citing new structure in support of structural obviousness.
  • Pointing to a different portion of the claim to maintain a “new matter” rejection.

In contrast to the above fact patterns, no new ground of rejection arises when the basic thrust of the rejection remains the same, i.e., when the appellant has had a fair opportunity to react to the rejection. See In re Kronig, 539 F.2d 1300, 1302-03, 190 USPQ 425, 426-27 (CCPA 1976). Fact patterns that do not constitute a new ground of rejection include:

  • Citing a different portion of a reference to elaborate upon that which has been cited previously.
  • Changing the statutory basis of rejection from 35 U.S.C. 103 to 35 U.S.C. 102, but relying on the same teachings.
  • Relying on fewer than all references in support of a 35 U.S.C. 103 rejection, but relying on the same teachings.
  • Changing the order of references in the statement of rejection, but relying on the same teachings of those references.
  • Considering, in order to respond to applicant’s arguments, other portions of a reference submitted by the applicant.

A good practice tip is to pay careful attention to arguments in the Examiner Answer. If they are different enough from the arguments on the record, consider pointing this out either in a petition or in a Reply Brief, as appropriate.

New USPTO subject matter memo in light of Vanda Pharms. v. West-Ward

After a relatively long break of over a year since the Amdocs memo, the USPTO looks like it’s back on track with subject matter eligibility memos. This year has already seen the Berkheimer memo. Now its latest memo, from Bob Bahr, was issued yesterday, June 7, 2018, in light of Vanda Pharms. v. West-Ward Pharms. (Fed. Cir. Apr. 13, 2018).

The memo guides examiners in examining diagnostics applications under Section 101. It is nothing revolutionary for examination under the Mayo/Alice framework, as it closely follows the Vanda holding. But the memo does clarify that examiners should consider an “application” of a natural relationship as satisfying Step 2A of the Alice test, without having to reach the “routine, conventional, and well-known” analysis of Step 2B. Specifically, the memo states that practically applying a natural relationship should be considered eligible; it is not necessary for such method of treatment claims to include non-routine or unconventional steps.

The recent memos offer some hope that the USPTO will continue to improve the predictability of how Section 101 rejections are applied.

Board panel citing Berkheimer to reverse judicial exception rejection of diagnostics claims: no evidence

In the two weeks since we predicted that the PTAB would start to dramatically change its outcomes for Section 101 rejections, we have seen no such change. Recap emails have mostly shown affirmances (only 7 reversals out of 86 total Section 101 decisions, an 8% reversal rate). But a decision in yesterday’s recap email shows precisely the kind of rejection analysis that is expected to become more mainstream at the PTAB.

Ex parte Galloway et al. (PTAB May 22, 2018) reversed a judicial exception rejection under Section 101 for lack of evidence. The panel, consisting of Donald E. Adams, Demetra J. Mills, and Ulrike W. Jenks, found that the Examiner had not provided evidence to support a prima facie case of patent-ineligible subject matter.

The panel cited Berkheimer in finding the Examiner’s step 2 analysis defective: “The Examiner has not established with appropriate factual evidence that the claimed method uses conventional cell counting methods.”

As a stylistic aside, Section 101 rejections are typically addressed toward the very top of a decision. It is unclear why (it may stem from how examiners or practitioners order the statutory rejections), but the practice has held in the Board’s decisions for several years. A recent trend, however, is for the Board to analyze Section 101 after the prior art rejections. That ordering makes sense: the absence of a sound prior art rejection can support the conclusion that step 2 of a Section 101 rejection is improper.

And that is precisely what happened here. The panel proceeded to support its conclusion that step 2 of the Alice/Mayo framework was defective by referring to its obviousness reversal. In other words, the Board’s finding that the claims were non-obvious supported the conclusion that the claim features were not simply conventional or known in the art.

Another interesting point about this case is that it reinforces the much higher reversal rates seen for Section 101 judicial exception rejections. The Board’s practice, as in this case, appears to be helping the patent-eligibility of diagnostics inventions.

As the PTAB becomes more confident in using Berkheimer in its decisions, expect more of the same analysis as in Ex parte Galloway. The appeal backlog has far too many cases in which Examiners did not have the guidance of Berkheimer for establishing the proper evidence for Step 2. Thus, the necessary analysis from the Board need only be short and crisp.

Tech center directors currently use their own appeal metrics for assessing examiners, but should use Anticipat data instead

The USPTO has a vested interest in knowing how well its patent examiners examine applications. It tracks production, efficiency, and quality. Even though examination quality has always been tricky to measure, one metric comes pretty close: an examiner’s appeal track record. And while tech center directors have had access to this data, until recently it has been difficult for anyone else to access. Here we explore the known gaps in how this metric is being used at the USPTO.

According to sources at the USPTO, directors–who oversee each technology center–have access to their Examiners’ appeal track records. The more an Examiner gets affirmed by the PTAB on appeal, the more reasonable the Examiner’s rejections, the theory goes. This means that directors can evaluate examiners based on how often an examiner gets affirmed.

The acceptable examiner appeal track record appears to depend on the director. An Examiner whose affirmance rate falls significantly below the director’s average will attract attention. The USPTO as a whole has an affirmance rate at the PTAB that hovers around 60%, and different art unit groupings vary significantly from this global rate. An affirmance rate consistently below the relevant average can put a question mark on an Examiner’s examination quality.

Even without knowing the specific contours of an acceptable affirmance rate at the USPTO, a look at the numbers can give an Examiner a general idea of how well he or she is doing. This can help an Examiner proactively track these metrics, and guide his or her appeal-forwarding strategy, before attracting attention. (Full disclosure: as a quality control metric, examiners do not appear to be punished in any way for being reversed.)
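What counts as “significantly below” the average is not public. One plausible way to operationalize it is a simple statistical comparison against the roughly 60% global affirmance rate mentioned above. The sketch below is our own illustrative heuristic, not a description of how directors actually score examiners:

```python
# Illustrative heuristic only: flag an affirmance rate that sits well
# below a baseline rate. This is NOT how the USPTO actually scores
# examiners; the ~60% baseline is the global rate mentioned above.
import math

def flag_low_affirmance(affirmed: int, appealed: int,
                        baseline: float = 0.60,
                        z_cutoff: float = 2.0) -> bool:
    """True if the affirmance rate is more than z_cutoff standard
    errors below the baseline rate (normal approximation)."""
    rate = affirmed / appealed
    std_err = math.sqrt(baseline * (1.0 - baseline) / appealed)
    return (baseline - rate) / std_err > z_cutoff

# Example: 8 affirmances out of 25 appealed decisions (32%)
print(flag_low_affirmance(8, 25))   # True: well below the ~60% baseline
```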

While appeal outcomes are available from other patent analytics services, those services rely only on the USPTO’s outcomes, which are based on how the overall decision was decided. See below.

[Chart: decision-based appeal outcome rates]

This decision-based outcome doesn’t communicate the issues that are causing examination problems (which issues are being reversed at the Board). By contrast, Anticipat provides a detailed breakdown of all an Examiner’s decisions. Examiners can thus easily pull up all of their appealed decisions and quickly see on which issues they were affirmed/reversed/affirmed-in-part.

On top of Examiner-specific data, Anticipat can identify rejection reversal rates across art units. For example, take obviousness rejections. Using Anticipat’s Analytics page to look at the past couple of years in art unit 2172, in the computer/electrical arts, the pure reversal rate is about 18% (see the blue sections of the graph). This is lower than the tech center reversal rate of 27%, and lower than the global USPTO reversal rate for this period.

[Chart: art unit 2172 obviousness reversal rates]

On the other hand, art unit 1631, in the biotech arts, has a much higher reversal rate with a decision pool of about the same size: 43% over the past couple of years. This is well above the 26% reversal rate of its tech center, 1600.

[Chart: art unit 1631 obviousness reversal rates]

Finally, art unit 3721, in the mechanical arts, has an obviousness reversal rate much higher than both of the above examples: a pure reversal rate of 53% over the past couple of years. This is higher than the tech center reversal rate of 44%, which is in turn higher than the global USPTO rate.

[Chart: art unit 3721 obviousness reversal rates]
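All three comparisons share one structure: the same reversal rate computed at the art unit, tech center, and USPTO-wide levels. A sketch with hypothetical records (the real numbers come from Anticipat’s Analytics page):

```python
# Sketch: reversal rates at increasing levels of granularity, computed
# from issue-level records. The records here are hypothetical; the real
# data comes from Anticipat's Analytics page.
from collections import defaultdict

# (art_unit, tech_center, outcome) for appealed obviousness rejections
records = [
    ("2172", "2100", "affirmed"), ("2172", "2100", "reversed"),
    ("1631", "1600", "reversed"), ("1631", "1600", "reversed"),
    ("3721", "3700", "reversed"), ("3721", "3700", "affirmed"),
]

def reversal_rates(group_index: int) -> dict:
    """Percentage of 'reversed' outcomes per group (art unit or TC)."""
    tallies = defaultdict(lambda: [0, 0])  # group -> [reversed, total]
    for record in records:
        tallies[record[group_index]][1] += 1
        if record[2] == "reversed":
            tallies[record[group_index]][0] += 1
    return {k: 100.0 * rev / tot for k, (rev, tot) in tallies.items()}

print(reversal_rates(0))  # per art unit, e.g. {'2172': 50.0, ...}
print(reversal_rates(1))  # per tech center
```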

Granular appeal data can show what currently available appeal data does not: whether an Examiner is substantively doing a good job of examining applications. There are three reasons this granularity matters for meaningful analysis of the metric.

First, as we’ve previously reported, the USPTO labels a decision as affirmed if even one rejection sticks as to all pending claims. So the USPTO/director statistics, and the patent analytics websites that report this affirmance rate, lack the proper context. Without that context, the appeal outcome is an incomplete and even misleading metric.
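To see why, consider how the decision-level label collapses issue-level outcomes. Here is a minimal sketch of the labeling convention as described above (our own simplified encoding of it, not the USPTO’s actual logic):

```python
# Sketch of the decision-level labeling convention described above,
# simplified: a decision is "affirmed" whenever the sustained rejections
# cover every appealed claim, even if other rejections (other issues)
# were reversed along the way.

def decision_label(appealed_claims: set, sustained_claim_sets: list) -> str:
    """Label a decision from the claims covered by sustained rejections."""
    covered = set().union(*sustained_claim_sets) if sustained_claim_sets else set()
    if covered >= appealed_claims:
        return "affirmed"
    return "affirmed-in-part" if covered else "reversed"

# A 103 rejection sticks as to all claims; the 101 and 112 rejections
# were reversed. The decision-level label still reads "affirmed".
print(decision_label({1, 2, 3}, [{1, 2, 3}]))  # affirmed
```

An Examiner reversed on two of three issues thus still shows up as affirmed in the decision-level statistic.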

Second, not all of the responsibility for low affirmance rates falls on the Examiner. For example, the two other conferees at the appeal conference can push applications to the Board that don’t have good rejections. But the Examiner-specific metric is a good starting point for spotting deviations from the norm. Anticipat allows lookups of other Examiners (including the SPE) to determine conferee track records.

Third, some variance in Examiner appeal outcomes stems from how the judges label the outcome. While it is somewhat rare for a decision to include a newly designated rejection, it does happen. And as Bill Smith and Allen Sokal recently pointed out in a Law360 article, decisions with new designations are inconsistently labeled as affirmed or reversed. Sometimes the panel will reverse the Examiner’s improper rejection but introduce a new rejection on that same ground with its own analysis. Other times the panel will affirm the Examiner’s improper rejection with its own analysis and be forced to designate the rejection as new. These small differences in patent judges’ preferences can impact an Examiner’s appeal profile.

Anticipat makes up for these shortcomings by providing greater context to outcomes and grounds of rejection. You can look at judge rulings in a particular tech center and identify patterns. For example, you can see whether panels tend to reverse or affirm when introducing new rejections.

Other valuable information, such as art unit and tech center data, can predictively guide an Examiner’s chances of taking an appeal to the Board. If a particular type of rejection consistently gets reversed on appeal, that knowledge can guide the decision at the pre-appeal conference or appeal conference whether to forward the case to the Board, especially if the appeal consists of only a single issue.

With this increased granularity in appeal data come more questions, ones that currently have less clear answers. For example, to what extent are greatly disparate appeal outcomes the result of differing quality of examination? To what extent are Examiners across different tech centers evaluated based on appeal outcomes? Is there a point at which an Examiner is considered to need improvement based on appeal outcomes? Could appeal outcomes, even if they include many reversals, affect the Examiner’s performance review? Likely not. But a better lens could prompt questions about disparate reversal rates across art units.