In 1753, Swedish botanist Carl Linnaeus introduced a system for classifying plants. His two-term classification system assigned each organism a generic first name and a more specific second name (e.g., Homo sapiens for humans). The system differed from earlier classifications, but that alone was not extraordinary. Rather, it was the elegance and simplicity of his system that proved groundbreaking, paving the way for all living organisms to be classified systematically and uniformly.
As the father of modern taxonomy, Linnaeus would be stunned to see how far technology has taken classification. Entire industries have been revolutionized by improved classification of big data, and patent prosecution is no exception. We at Anticipat are excited to introduce “tags” on the Research Database for all grounds of rejection as a way of classifying patent prosecution rationales.
If the Board decision equivalent of a genus is a ground of rejection, then the species is the tag. The Anticipat Research Database has long included the more generic grounds of rejection for each appealed decision (e.g., §101, §102, §103, §112, OTDP, etc.). But because there are so many wide-ranging, common reasons why a rejection is reversed or affirmed, it helps to dig deeper than merely cataloging the ground of rejection. This deeper level represents the various possible points of contention. Identifying these specific categories allows you to find the decisions relevant to a particular issue without drowning in the ocean of Board decisions.
In short, a tag is a brief summary of a more granular point of contention within a ground of rejection. Assume that an applicant and an Examiner are at odds over a ground of rejection. Depending on the rejection, this disagreement can turn on a finite number of points. Now, with tags, you can easily look up the various points of contention for each ground of rejection. In other words, if you believe that a particular obviousness argument is worth pursuing, you can find decisions where the Board reversed an Examiner on that very argument. Or if you don’t know which argument is worth pursuing, you can quickly find the arguments that have been most successful in reversing the Examiner. Here are examples of the ground of rejection/tag classification system.
Some of the tags for the ground of rejection “§101 – nonstatutory subject matter” include:
- Software/Data per se
- Abstract Idea (prima facie case, step one, step two)
- Law of Nature (prima facie case, step one, step two)
- Naturally-Occurring Phenomenon (step one, step two)
Finding all the decisions with a particular ground of rejection is just the first step. Even more useful is weeding out less relevant decisions that fall within the same ground-of-rejection category. Take the abstract idea rejection. Many points of contention within §101 nonstatutory rejections are less relevant to abstract ideas: a computer-readable medium that comprises a signal, software per se, combining multiple statutory classes, law of nature, naturally occurring phenomenon, claiming a human, etc. Even within abstract idea, there are multiple points of contention, such as 1) prima facie case (the Examiner did not even do the minimum job of identifying and/or rejecting the claim as an abstract idea), 2) step one, and 3) step two of the Mayo/Alice framework. Because we at Anticipat track all of these subcategories for you, you can find decisions with your desired point of contention in seconds.
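The ground/tag lookup described above amounts to filtering a collection of classified decisions on two levels: the genus (ground of rejection) and the species (tag). Here is a minimal sketch of that idea in Python; the field names, case numbers, and sample data are hypothetical illustrations, not Anticipat’s actual schema or data:

```python
from dataclasses import dataclass, field

@dataclass
class Decision:
    """A Board decision classified by ground of rejection and tags (hypothetical schema)."""
    case_id: str
    ground: str                                 # genus, e.g. "§101"
    tags: set = field(default_factory=set)      # species, e.g. {"abstract idea: step two"}
    outcome: str = "affirmed"                   # "reversed" or "affirmed"

# Invented sample data for illustration only
decisions = [
    Decision("A-1", "§101", {"abstract idea: step two"}, "reversed"),
    Decision("A-2", "§101", {"software per se"}, "affirmed"),
    Decision("A-3", "§101", {"abstract idea: prima facie case"}, "reversed"),
]

def find(decisions, ground, tag, outcome=None):
    """Filter decisions by ground of rejection, tag, and (optionally) outcome."""
    return [d for d in decisions
            if d.ground == ground
            and tag in d.tags
            and (outcome is None or d.outcome == outcome)]

# Decisions where the Board reversed a §101 rejection on a step-two argument:
hits = find(decisions, "§101", "abstract idea: step two", outcome="reversed")
print([d.case_id for d in hits])  # ['A-1']
```

The two-level filter is the whole trick: the ground alone returns every §101 decision, while the tag narrows the results to the one point of contention you actually care about.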
The ground of obviousness under 35 U.S.C. 103(a) includes our most advanced set of tags. This makes sense, as obviousness is one of the most nuanced and developed grounds of rejection. We track more than 20 points of contention within obviousness, including the following:
- Scope and Content of Prior Art – Broadest Reasonable Interpretation
- Examiner Bears Initial Burden (Prima Facie Case)
- Clear and Factually-Supported Articulation of Reasons for Obviousness (Prima Facie Case)
- Hindsight Reasoning (Prima Facie Case)
- Secondary Considerations
- Combining/Substituting prior art elements according to known methods to yield predictable results
- Use of known technique to improve similar devices (methods, or products) in the same way
- Applying a known technique to a known device (method, or product) ready for improvement to yield predictable results
- “Obvious to try” – choosing from a finite number of identified, predictable solutions, with a reasonable expectation of success
- Known work in one field of endeavor may prompt variations of it for use in either the same field or a different one
- Proposed modification cannot render the prior art unsatisfactory for its intended purpose
- Teaching away
- Non-Analogous Art
The challenge in finding relevant arguments for overcoming an obviousness rejection, for example, is that any case decided by the Board could involve one or more of these obviousness points. Given the sheer volume of obviousness decisions, finding the relevant ones has been impractical.
The point of tags
You can do a lot of interesting things with tags. Suppose, for example, that you are stuck on a particular rationale used by an Examiner that you think is unreasonable. You can easily match your issue with cases decided by the Board and use Anticipat to quickly pull the decisions where the Board agreed with you. If there are very few decisions on your side, that is a valuable reality check. If there are many, you can review them to confirm that your facts correspond to those in the decisions. You can also cite the legal authority the Board relied on for that particular point of contention when persuading your Examiner.
Tags will be incorporated into our soon-to-be-released Practitioner Analytics page to guide prosecution strategy. Using the Practitioner Analytics interface, tags can be ranked by how frequently they succeed in overturning an Examiner’s particular rejection. You can thus find better arguments faster and with more confidence. Sign up for an invitation to Practitioner Analytics.
Board decisions show that independent judges have agreed with the applicant’s position in a related case. They are a powerful way to check and augment your existing experience.
You don’t have to be a famous botanist to appreciate that identifying Board decisions through a rejection/tag relationship is a simple but powerful way of describing how the Board decides cases today. In the aggregate, this structure provides targeted information to inform your patent prosecution strategy. At about $1 per day, the Anticipat Research Database is not only incredibly affordable; at current hourly billing rates, it pays for itself in just seconds a day. Try it now for free.