Anticipat blog recognized as top 100 IP blog

After a year and a half of posting, this blog is starting to get recognized. In addition to traffic growth, Anticipat blog has been selected as one of the Top 100 Intellectual Property Blogs on the web by Feedspot. See https://blog.feedspot.com/intellectual_property_blogs/

Feedspot’s list is a comprehensive ranking of intellectual property blogs on the internet. Anticipat comes in at #74, publishing at a rate of about one blog post a week.

This list highlights that there are many good IP blogs to follow. In fact, we include a section in the right sidebar under BLOGS TO FOLLOW with a short list of some of these IP blogs to follow. 

Stay tuned for many more interesting and relevant posts. We will continue providing practical content for the patent prosecutor.

The appeal outcome is one of the most telling metrics for patent prosecution analytics. Here’s why

Big data is slated to revolutionize all aspects of society, and patent prosecution is no exception. But because of the complexity of patent prosecution, insights from big data must be carefully considered. Here we look at why the appeal outcome is one of the most telling metrics: it yields good insights with few misleading explanations.

Because much of patent data is publicly available, some companies are amassing troves of patent data. And some metrics that suggest insight are relatively easy to calculate.

Take the allowance rate. You can compare an examiner’s total granted patents to the examiner’s total applications and voila. In some circles a high allowance rate is a good thing for both examiner and applicant. Under this theory, an Examiner with a high allowance rate is reasonable and applicant-friendly. By the same token, a law firm with a high allowance rate is good, and an examiner or law firm with a low allowance rate is bad.
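As a rough illustration of how simple this metric is to compute (the records and field names below are hypothetical, not any real USPTO schema), a bare allowance rate is nothing more than granted applications divided by total disposed applications:

```python
# Hypothetical, simplified records for illustration only.
applications = [
    {"examiner": "A", "status": "granted"},
    {"examiner": "A", "status": "abandoned"},
    {"examiner": "A", "status": "granted"},
    {"examiner": "B", "status": "abandoned"},
]

def allowance_rate(apps, examiner):
    """Granted applications divided by total disposed applications."""
    mine = [a for a in apps if a["examiner"] == examiner]
    if not mine:
        return None  # no data for this examiner
    granted = sum(1 for a in mine if a["status"] == "granted")
    return granted / len(mine)

print(allowance_rate(applications, "A"))  # 2 grants out of 3 applications
```

The ease of this calculation is precisely the problem: as discussed below, the number carries no context about application quality or claim breadth.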

Another metric is the speed of prosecution. Take the average time it takes to grant an application, whether measured in months, office actions, or RCEs. Under a theory on speed, an examiner or law firm with a faster time to allowance is better.

While these theories could be true in certain situations, there are confounding explanations that arrive at the opposite insight. In sum, these are woefully incomplete insights for three reasons.

First, these metrics incorrectly assume that all patent applications are of the same quality. In reality, examiners are assigned patent applications in a non-random manner. The same applicant (because of the similarity of subject matter) will have a large proportion of applications assigned to a single Examiner or art unit. This means that related applications from the same applicant (drafted with the help of counsel) can be of very high or very low quality. Related low-quality applications can suffer from the same problems (little inventiveness, lack of novelty, poor drafting) that do not depend on the examiner. So an examiner, through no fault of his own, can be assigned a high percentage of “bad” applications. In these cases, a low allowance rate should reflect on the applications, not the examiner.

Correspondingly, clients of some law firms instruct them to prosecute patent applications in a non-random fashion. Especially for cost-sensitive clients, some firms are assigned very specialized subject matter to maximize efficiency. But not all subject matter is equally patentable. So even when a law firm has a low allowance rate, it oftentimes does not mean that the firm is doing a bad job. On the contrary, the firm could be doing a better-than-average job for the subject matter, especially in light of budget constraints imposed by the client.

This non-random distribution of patentable subject matter undermines the use of a bare allowance rate or time to allowance.

Second, a favorable allowance rate or allowance time can actually signal a poor-quality patent. That is, one job of patent counsel is to advocate for claim breadth. But examiners will propose narrowing amendments, even when not required by the patent laws, because narrower claims make the examiner’s job easier and reduce the risk of subsequent quality issues. So oftentimes a quick and easy notice of allowance merely signifies a narrow and less valuable patent. Office actions often include improper rejections, so a metric that rewards a quick compromise can show that the law firm isn’t sufficiently advocating. Thus, using these metrics to evaluate law firms could be telling you the opposite of what you want to know, depending on your goals.

Plus, getting a patent for the client does not always best serve the client’s needs. Some clients do not want to pay an issue fee and subsequent maintenance fees on a patent that will never be used because it is so narrow and limiting. So it’s a mark of good counsel when such clients are not handed a patent just because a grant is the destination. Ideally, the client is brought in to the business decision of how the patent will generate value. The counsel that does this, and whose allowance rate drops as a result, is unfairly portrayed as bad.

Third, these metrics lack substantive analysis. Correlating what happens with patent applications only goes so far without knowing the points at issue that led to the particular outcomes. Some applicants delay prosecution for various reasons using various techniques, including filing RCEs. There are legitimate strategies that benefit the client in doing so.

All this is not to say that such metrics are not useful. They are. These metrics simply need a lot of context for the true insight to come through.

In contrast to the above patent prosecution metrics, Anticipat’s appeal outcome directly evaluates the relevant parties in substantive ways. The appeal outcome metric reveals what happens when an applicant believes that his position is right while the examiner and the examiner’s supervisor believe the opposite. In such cases, who is right? The parties are forced to resolve the dispute before an independent panel of judges. And because getting to a final decision requires significant time and resources (e.g., filing briefs and one or more appeal conferences that can kick out cases before they reach the board), the stakes are relatively high. This weeds out alternative explanations for an applicant choosing to pursue an appeal. Final decisions thus stem from a desire and a position to win a point of contention that the Examiner thinks he is right on, not a whimsical exercise. And tracking this provides insights into the reasonableness of Examiner rejections and into how counsel advises and advocates.

The appeal metric helps to evaluate the Examiner because being overturned on certain grounds of rejection (say, more often than average) says something about the examiner’s and his supervisor’s ability to apply and assess rejections. After working with an examiner, there can come a point when the examiner will not budge. The PTAB then becomes the judge of whether the Examiner is right or wrong. The same analysis works for evaluating whole art units or technology centers.

With Practitioner Analytics, you can see the reversal rates of specific Examiners, art units, and tech centers, but you can also look at the specific arguments that have been found persuasive at the Board. This means that if you are dealing with a specific point of contention, say the “motivation to combine” issue of obviousness, you can pull up all the cases where the PTAB overturned your examiner on this point. The overall reversal rate and the specific rationales can both validate that the Examiner is not in line with the patent laws and rules.

The appeal metric also helps evaluate counsel because it shows what happens when counsel believes their position is right even though the examiner will not budge. If counsel wins on appeal, the metric confirms counsel’s judgment and their ability to make persuasive arguments in briefs.

Even a ninja can generate allowance rate metrics. But the savvy patent practitioner looks for more context to guide prosecution strategy. Insights drawn from carefully analyzed data avoid counter-intuitive explanations.

How often do the largest patent firms appeal?

We recently reported on eight reasons to consider filing an appeal during the course of patent prosecution. Based on the currently low number of appeals across all applications, we suggested that some law firms may be underutilizing the appeal procedure in their practices. Now we report on differences among the three largest patent firms. Our methods are explained in more detail at the end.

The top firms are 1) Finnegan, Henderson, Farabow, Garrett & Dunner LLP; 2) Fish & Richardson PC; and 3) Knobbe Martens, a ranking that comes from a recently published PatentlyO blog post on the biggest firms by total registered patent attorneys/agents. The PatentlyO post put Finnegan in the lead, with Fish second and Knobbe close behind. This doesn’t mean that these firms have the most patent prosecution work, but it at least puts us in the ballpark. We report here that these three firms have far different numbers of ex parte PTAB appeals.

From July 25, 2016 to February 22, 2018, Fish & Richardson had 143 appeals. Finnegan was second with 78 appeals. And Knobbe was third with 60 appeals. While Fish and Knobbe had roughly the same number of patent applications (60,916 and 58,170, respectively) across all customer numbers searched, Fish had more than double the appeals. Even Finnegan, which totaled a third fewer applications (41,194) than Knobbe, had more appeals than Knobbe. 
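One rough way to compare firms of different sizes is to normalize appeal counts by application volume. Here is a quick sketch using the figures above (the “appeals per 1,000 applications” framing is our own, not a standard metric):

```python
# Appeal counts and application totals taken from the figures above.
firms = {
    "Fish & Richardson": {"appeals": 143, "applications": 60916},
    "Finnegan": {"appeals": 78, "applications": 41194},
    "Knobbe Martens": {"appeals": 60, "applications": 58170},
}

# Appeals per 1,000 applications, highest first.
rates = {
    name: 1000 * d["appeals"] / d["applications"]
    for name, d in firms.items()
}
for name, rate in sorted(rates.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {rate:.2f} appeals per 1,000 applications")
```

Even after normalizing, the ordering holds: Fish appeals at more than double Knobbe’s rate, with Finnegan in between.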

The disparate number of appeals across these firms stems from a confluence of factors. One factor relates to the law firm itself. That is, a law firm may over- or under-sell the benefits of an appeal. 

Some practitioners get comfortable preparing amendment-style office action responses because they are the most common. Knowingly or not, psychological biases can influence the response strategy that a practitioner chooses or recommends. A practitioner might let a recent successful strategy in one case influence strategy in an unrelated case simply because the success is recent. Plus, projects with fixed or capped fees favor efficiency, and practitioners may opt for work that they are efficient at doing. These biases can be reinforced by billable-hour incentives, because prosecution can always be continued with an RCE.

Finally, in law firms’ defense, before now there hasn’t been a way to quantify the chances of succeeding on the merits of an appeal. A wealth of experience can put someone in the general vicinity, but even then the picture is incomplete.

So part of the reason why firm appeal rates differ is law firm-specific.

Another factor in why law firms pursue appeals at much different rates relates to the client. Some clients simply care less about the quality of patents; to them, numbers are more important. So despite the appeal procedure’s several advantages for getting a good patent, it may not be necessary for some clients’ goals. An allowed application, even with narrow, unusable claims, may be good enough.

That clients drive appeals is perhaps best shown in the unequal distribution of appeals across customer numbers within a given firm. Pockets of appeals may show up disproportionately high for one customer number and low for others, suggesting that the decision to file an appeal largely depends on the client. Some clients may not like the concept of appealing. And as every lawyer knows, even excellent advice can only go so far, after which the client makes the call. 

Further still, different firms have different clientele, and some clients have more patentable subject matter than others. Oftentimes, appeals are pursued only after options with the examiner have been exhausted. So if firms operate under this paradigm (and we are not suggesting that these three firms do), the clients with less patentable subject matter might appeal more. But part of advocating includes not only accepting the client’s money but also providing realistic feedback on patentability.

Another reason a client may shy away from an appeal may stem from a lack of trust in the practitioner. The upfront cost of appealing is not small change. And the client may interpret a suggestion to appeal as a way to extract more money, even when it is made with the best of intentions. Up until now, there has not been a good way to objectively convey the chances of succeeding on appeal and advancing prosecution.

But now, with Anticipat Practitioner Analytics, you can print an unlimited number of professional reports showing how often the Board overturns specific grounds of rejection relevant to a specific examiner. These include the specific points, called tags, and the legal authority that the Board relied on in overturning similar rejections.

For example, take a Section 101 rejection asserting an abstract idea. You believe the examiner is wrong at step 1. By looking at Anticipat, you can see where the Board has overturned abstract idea rejections based on step 1 for this examiner, art unit, or tech center. With this knowledge, you can feel more confident in advising an appeal. A data-driven approach can thus greatly improve the advice and build trust in the strategy.

Give Anticipat a try with a 14-day free trial. Our team is happy to provide a demo.

Methods

We looked up customer numbers for the three firms using a publicly available dataset. We then analyzed the customer numbers associated with each firm, of which there were many. Finnegan has at least 55 customer numbers totaling 41,194 applications. Fish has over 100, totaling 60,916 applications. And Knobbe has at least 50 customer numbers, totaling 58,170. We then plugged the customer numbers into Anticipat’s Research page and tallied the totals for the relevant window of time.

Anticipat’s Mission: Help Patent Practitioners Succeed with the Best Data

We at Anticipat have a passion for improving patent prosecution by harnessing better data. We want our users to succeed in their own practices with the help of this data.

Better data includes aggregate ex parte appeals data that is relevant to the grounds of rejection practitioners face. That is, an Office Action with a particular ground of rejection and specific reasoning has very likely been overturned on appeal in another application. We connect those dots for you.

Better data also includes more general metrics, such as reversal rates for specific grounds of rejection for a given Examiner, art unit, or tech center. It also includes the arguments and legal authority that the Board has used in overturning specific Examiner rejections.
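As a simplified sketch of what such a metric involves (the decision records below are made up, and the field names are our own, not Anticipat’s actual schema), a reversal rate is just reversals divided by total decisions, grouped by ground of rejection:

```python
from collections import Counter

# Hypothetical decision records for illustration only.
decisions = [
    {"examiner": "Smith", "ground": "103 obviousness", "outcome": "reversed"},
    {"examiner": "Smith", "ground": "103 obviousness", "outcome": "affirmed"},
    {"examiner": "Smith", "ground": "101 abstract idea", "outcome": "reversed"},
]

def reversal_rates(records, examiner):
    """Fraction of appealed rejections reversed, per ground, for one examiner."""
    totals, reversals = Counter(), Counter()
    for r in records:
        if r["examiner"] != examiner:
            continue
        totals[r["ground"]] += 1
        if r["outcome"] == "reversed":
            reversals[r["ground"]] += 1
    return {ground: reversals[ground] / totals[ground] for ground in totals}

print(reversal_rates(decisions, "Smith"))
```

The arithmetic is trivial; the value lies in the annotation work of tying each decision to the correct examiner, ground, and outcome.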

While much of Anticipat’s initial focus and expertise is on Board data, that data is only one piece of the puzzle. Our holistic approach requires data from all facets of patent prosecution, as well as a deep understanding of the context of patent prosecution procedures. We strive to further understand the incentives and behavioral decision-making patterns of all parties involved in the patent system so that the proper context of USPTO statistics is understood and applied.

Only by having the best data can you optimally guide your prosecution strategy. With this arsenal of data, you can anticipate expected outcomes and put yourself in the best position for success. We hope you’ll join us on the journey. Click here to get started.

Recent Rehearing Decision Reverses Panel’s Previous Affirmance on Section 101

Losing a Section 101 appeal at the PTAB can sting. In many cases, continued examination is off the table as further amendments may not help the cause. And appealing up to the courts involves spending a lot of time and money. But there is another option: filing a request for rehearing. A recent decision shows that this procedure is not fruitless for Section 101 rejections, even if it may seem like it is. 

The recently decided Ex parte MacKay, Appeal No. 2015-008232 (September 20, 2017), reversed a Section 101 rejection that the panel had affirmed in its initial decision. In the rehearing decision, the panel was brief in acknowledging that the relied-upon identification of an abstract idea (i.e., “rules for playing a game”) may not correspond to the limitations recited in the claims on appeal. Instead, the claims recite the creation of a game board surface image. The panel concluded that the record failed to adequately establish that the claims at issue are directed to an abstract idea, and the rejection under 35 U.S.C. § 101 was not sustained.

On its face, intuition might suggest that requests for rehearing are a futile endeavor. And perhaps the numbers reflect this futility. The percentage of applications that get appealed to the PTAB is quite low, 1-2%. And the percentage of appeals where the applicant files a request for rehearing is lower still, about 1-2% of appealed decisions. On the surface it makes sense why this procedure is rarely used. But it should not be taken out of consideration, for the following two reasons.

First, appellants may present a new argument based on a recent relevant decision of either the Board or the Federal Circuit. But unless a case comes out that supports the appellant’s position and is directly on point, the rehearing panel can easily distinguish it. Plus, with such a short window between the appeal decision and the rehearing decision, unless the Board failed to consider a key case in its original decision, it seems less likely that an appellant’s new argument will save the day.

The second reason an appellant should consider rehearing is to show that the Board misapprehended or overlooked points. See 37 C.F.R. § 41.52(a)(1). Because the same panel of judges that rendered the initial decision rules on the request for rehearing, it might seem unlikely that the panel would admit that it misapprehended points in its earlier decision. But it turns out that it does work, as shown in the above case.

In conclusion, if you’re feeling out of options after an unsuccessful appeal to the PTAB, consider filing a request for rehearing. It’s fast (only a few extra months of wait time for a decision) and, as shown above, there’s a chance that it helps reverse the rejection. Plus, the cost is minuscule compared to appealing to the Federal Circuit or the Eastern District of Virginia (the other options for seeking redress of an unfavorable PTAB decision).

When a Final Office Action comes in, consider using Anticipat. Here’s why

A first Office Action can involve a lot of guesswork. What does the Examiner mean with a particular rejection analysis? Is the Examiner serious with a particular rejection or just bluffing? Can the Examiner really get away with a particular rejection? To understand the issues and hopefully resolve them, an interview and a response with clarifying amendments and strong arguments can be critical to getting the application allowed.

But when a Final Office Action comes in, at least two pieces of the guesswork are gone: the Examiner still does not believe your application is patentable, and you are likely facing an RCE. You have several options. You can appease the Examiner with further claim amendments, narrowing the claims with no guarantee of an allowance. You can appeal the case to an independent panel of judges at the PTAB (the Board) to evaluate the propriety of the rejections. Or you can use Anticipat.com to learn from others’ appeal outcomes involving this Examiner or art unit, drawing lessons from similarly fought battles. This latter option can guide your specific prosecution strategy, putting you on top for your client.

An important point about Examiners is that they all examine applications quite differently. For one, they have different personalities and understandings, resulting in varied interpretations of the patent laws and rules. And these kinds of personality differences tend to repeat from application to application. For example, an Examiner who is preoccupied with unreasonably broad claim interpretations in one application will be preoccupied with them in another.

Another difference in examination lies in examiners’ specific training and work-group guidance, much of which stems from technology-specific nuances. For every formal Guideline published by the USPTO, many other internal guidance documents circulate to tech centers and art units, instructing examiners within these smaller groups how to examine applications.

But not all of these personality quirks or internal memos comport with the patent laws or rules, which is where the Board comes in. The Board is the first line of defense in holding Examiners, and even their SPEs, accountable for the rejections they issue. When an applicant appeals a case, the Board is the first body that can overturn the Examiner and supervisor. Having objective evidence of this track record, along with lessons from prior decisions, can inform or validate a particular strategy. Even if a practitioner already knows a particular examiner’s or art unit’s quirks, Board data for that examiner or art unit can show how others succeeded in dealing with the same issues.

With Anticipat Research, you can quickly look up all decisions involving a particular Examiner, her art unit, or tech center on any ground of rejection. The savvy practitioner will likely want to see the decisions that were overturned on the issue at hand. This research tool cuts down the time compared to the public ways of finding this information. See https://e-foia.uspto.gov/Foia/PTABReadingRoom.jsp. It also makes searching an overall better experience. This tool pays for itself within minutes of use each month.

Anticipat Practitioner Analytics goes a step further in making Board information actionable in your own prosecution practice. Simply input an application number (or Examiner name or art unit) and you can see the Examiner’s reversal rate for a particular ground of rejection. You can also see the exact numbers of decisions reversed and click through to the specific decisions to confirm how similar the issues really are.

Anticipat Analytics also breaks down the most persuasive arguments that the Board relied on in reversing the particular ground of rejection of interest. Where appropriate, you can click on a legal authority icon that provides the legal authority the Board relied on for that particular argument. Together, these give a practitioner an outline, straight from the Board, of successful ways to overcome various grounds of rejection. The efficiencies and knowledge gained pay for themselves within minutes of use. To see more, watch this YouTube video.

Give Anticipat a try right now for a free trial.

Anticipat’s focus is simple: complete and accurate annotation of PTAB ex parte appeals decisions

Despite recent strides, the USPTO does not make it easy to extract all its data. This is especially true for ex parte appeals decisions from the Patent Trial and Appeal Board (PTAB)–even though these appeals decisions establish key data points about general patent prosecution. We discuss seven shortcomings of the PTO websites as well as Anticipat’s solution to each of these shortcomings.

1) No centralized repository – If you are looking for a decision without knowing the authority (i.e., precedential, informative, or final), you will likely have to search through three different databases on different web pages. This is because the different types of PTAB decisions are scattered across different web pages depending on the authority of the decision.

Anticipat houses all decisions in a single repository and it labels each decision with the respective authority. To date, Anticipat has all publicly available PTAB appeals decisions in its database.

2) Non-uniform and sporadic decision postings – The USPTO does not post every decision to the Final Decisions FOIA Reading Room webpage on its issue date. For example, of 100 decisions dated July 29, five may show up on July 29, fifteen on July 30 (still dated July 29 in the database), twenty on July 31, fifty on August 1, and the remainder may trickle in over the following week. To monitor recent decisions, it takes time to keep track of which decisions have already been reviewed.

To fix this, Anticipat runs multiple redundant scrapers to check for backfilled decisions, making sure that every decision posted to the e-foia webpage is picked up. And it emails a recap of these annotated decisions on the 10th day to make sure that the complete set has been included.
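One simple way to handle backfilled postings like this is to re-poll a trailing window of dates and keep only the decisions not seen before. Here is a minimal sketch under stated assumptions: `fetch_decisions_for` is a hypothetical stand-in for scraping the Reading Room by date, and the real pipeline is surely more involved:

```python
# Track which decision IDs have already been processed, so that decisions
# posted days after their issue date are still picked up exactly once.
seen_ids = set()

def poll_window(fetch_decisions_for, dates):
    """Re-poll each date in a trailing window; return only newly seen decisions.

    fetch_decisions_for(date) -> iterable of (decision_id, payload) tuples
    (hypothetical; stands in for a scraper querying the FOIA page by date).
    """
    new = []
    for date in dates:
        for decision_id, payload in fetch_decisions_for(date):
            if decision_id not in seen_ids:
                seen_ids.add(decision_id)
                new.append((decision_id, payload))
    return new
```

Re-running `poll_window` over the same dates is harmless: already-seen decisions are skipped, and only late-posted ones come back as new.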

3) Unreliable connection – Whether you’re just trying to load the main USPTO page or whether you’re searching for a particular decision, the PTO site (especially the FOIA Reading Room) can be slow or even unresponsive in letting you access data.

Anticipat solves this problem by being hosted on a scalable cloud server. The site should never be down, even during peak traffic.

4) Search functionality limited – The Final Decisions page allows limited search (e.g., date range, Appeal No., Application No., text search). But none of these search capabilities is available for the 21 precedential and 180 informative decisions.

Even though the Final Decisions page allows some search functionality, the type of searchable data underwhelms. First, the input fields can be extremely picky. For example, if you input an Application No. with a slash (“/”) or a comma (“,”), you get a “no results found” message. But for this input, the real problem is not that there are no results for the value entered. Rather, it is that you included a character not recognized by the program. The misleading message does not distinguish between a query with no matches and a query whose format simply does not meet the website’s expectations. Further, there is no search capability for some of the most useful types of data: art unit, examiner, judge, type of rejection, outcome of the decision, class, etc.

To overcome this, Anticipat permits loose input so that you unambiguously get the results you need without having to guess the required format. And it does this for decisions of every type of authority. Anticipat has also taken the time to supplement decisions with their respective application information, such as art unit, examiner, judges, grounds of rejection, and outcomes. Only Anticipat’s database allows you to find all those cases using the data most useful for your analysis.

5) Unorganized data display – In addition to the data not being organized into one repository, as discussed in 1), the organization within the Final Decisions page is lacking. To its credit, the PTO does provide some organization of the various decisions. It organizes Final Decisions by (D) – Decision, (J) – Judgment, (M) – Decision on Motion, (O) – Order, (R) – Rehearing, and (S) – Subsequent Decision. However, the page does not allow you to display decisions by each type. Indeed, this typing of decisions feels more like an afterthought than a way for users to effectively organize the data. Further, the organization does not go far enough. For example, within (D) – Decision are reexaminations, reissues, inter partes reviews, covered business methods, decisions on remand from the Federal Circuit, and regular appealed decisions. There is no way to filter these different types of decisions from each other without manually screening all the decisions in the results list.

To fix this, Anticipat’s database tracks the various types of decisions so that one can easily filter by certain subsets of decisions or search within specified subsets. Each sortable column can be sorted in ascending or descending order. Columns of other information can be added by selecting the checkboxed fields.

6) Downtime from 1:00AM – 5:00AM EST – Every morning, the PTO takes the FOIA Reading Room website offline and performs maintenance. This may not be a big deal to some people, but for someone in another time zone or just in night-owl mode, this four-hour window can cost you a lot of time in accessing your desired decision or data.

Being hosted on a cloud server, Anticipat has no regularly scheduled maintenance downtime. You are free to use it at all hours of the day.

7) Errors – Coming from a federal government website, it’s understandable that some of the decision data contain errors. Some errors are minor, such as the name of a decision being cut off because it includes an apostrophe. Others are more consequential, like matching a decision to the wrong application number or merging two decisions into one. Because every decision in the Anticipat database is verified using our proprietary systems, we work hard to catch and resolve the errors in the source data of every decision.

 

In conclusion, because of the deficiencies discussed above, ex parte PTAB data has been consistently overlooked because it simply cannot be effectively retrieved and analyzed by practitioners. While you may not realize it yet, this may be costing you time and money. Anticipat.com alleviates these deficiencies. Access the Research Database here.