This part of Who Wrote Shakespeare's Works? proposes a way to proceed that might--in time--put the question to rest.
In Part I of this article, I showed that the man from Stratford could not possibly have been the author known as Shakespeare. In Part II, I evaluated several alternative theories. Here, I propose a methodology that might make it possible to put the question to rest.
It should be possible to put the question to rest for reasonable people, at any rate. For people who are determined to believe the earth is flat, no amount of scientific evidence will ever be conclusive. Even going up in a balloon and seeing the curvature of the earth won't do it. Such people hang on to their beliefs with grim determination, regardless of evidence or logic. There is little that can be done, alas. As long as their very identity is inextricably wedded to their belief system, any suggestion that they are wrong is tantamount to a death sentence. Naturally, any such suggestion provokes a vigorous response of self-defense. The ability to separate one's sense of self from the ideas that one holds to be true is probably the first requirement for a scientist or philosopher. It requires a kind of intellectual and emotional maturity that, somehow, we need to find a way to foster in our education system.
With the kind of analytic tools we have today, it is possible to look deeper into the authorship question than earlier generations were able to. Still, it is necessary to ask the right questions--because even the best analytic tools are useless if not directed at the proper target. (You may have the best, most award-winning hammer in the world. But if you need to dig a ditch, it's of little use.)
Here are some approaches that have been tried, without convincing (or corroborated) success:
While I initially had high hopes for style analysis, I reluctantly conclude that it may not be of much help. Attempts to apply any such analysis to the entire body of works must inevitably lead to contradictions, coupled with the temptation to warp the methodology to remove them. It is still possible, though, that multiple stylistic tools could confirm authorship for a segment of the canon. But I will leave it to others to determine which tools those might be, and to evaluate the results. I am more focused on the question: What are reasonable segments? I think it is possible to find at least some groups of works for which a convincing argument can be made for a single author. Once identified, stylistic tools can be reapplied to those segments.
There are still several tools at our disposal, however, that should prove illuminating:
Before concluding, let's examine ways in which those tools can be used.
Vocabulary analysis should make it possible to divide the works into groups that contain the same words, to determine how many authors there were. It should then be possible to assign each collection of works to one of the various candidates, with a sufficiently high level of probability to be believable.
This is probably the easiest part. To do that analysis, each of the works is fed into a computer, one by one, and boiled down into a list of the words used in that work.
Some of the questions to answer in that process are whether "sun" and "sun's" count as one word or two and, similarly, whether "ran" and "run" count as one or two. (It may not really matter, as long as the same rules are used throughout the analysis. So doing whatever is easiest for the computer is the place to start. In some ways, doing so will reduce the amount of overlap, since different variations of a word won't be counted together. On the other hand, overlaps that do exist will be that much stronger, since there is a stylistic element to a person's word choices.)
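Wherever the "one word or two" line is drawn, the important thing is to draw it in exactly one place, so the same rule applies to every work. A minimal sketch in Python (the rules here are one illustrative choice, not a recommendation):

```python
def normalize(token: str) -> str:
    """Lowercase a token and strip a trailing possessive ("sun's" -> "sun").
    Inflections like "ran"/"run" are deliberately left distinct here --
    collapsing them would require a stemmer, which is a separate choice."""
    token = token.lower()
    for suffix in ("'s", "\u2019s"):
        if token.endswith(suffix):
            return token[: -len(suffix)]
    return token

print(normalize("Sun's"))  # -> sun
print(normalize("ran"))    # -> ran
```

Because every work passes through the same function, whatever bias the rule introduces is at least applied uniformly.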
The idea would be to create a matrix for the works, showing the degree of overlap between the vocabularies:
                            All's Well that Ends Well   As You Like It    ...
All's Well that Ends Well   x                           2,000 (sample)
As You Like It              2,000 (sample)              x
...                                                                       x
For example, assume that As You Like It and All's Well that Ends Well have 2,000 words in common. (These are totally made-up numbers, used for illustration only.) And let's say that those plays have only 20 words in common with other works. It would then be easy to identify those two as having been written by the same author, and to distinguish them from all other works in the collection.
With that table filled in, it should be possible to group those works into sets that share a common vocabulary with each other. The first interesting question, then, will be: How many groups are there? Two? Three? A dozen? It is an analysis that is begging to be performed.
Creating the table is easy enough. The works are divided into categories (comedy, history, tragedy, poetry) and listed online at a site maintained at MIT: The Complete Works of William Shakespeare. There are exactly 42 of them, including a "Funeral Elegy" by "W.S."
Each entry in that page is a link to an online copy of the work, as well. So creating a word-list is as simple as copying the contents of the work, pasting them into a file, and then running a script (easily created with Unix utilities, for example) that breaks the file into words, sorts them, and eliminates duplicates. What is left is then a sorted list of words used in that work.
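That same pipeline can be sketched in a few lines of Python instead of Unix utilities; the tokenizing rule here is one illustrative choice among several:

```python
import re

def word_list(text: str) -> list[str]:
    """Break a work's text into words, lowercase them, sort them, and
    eliminate duplicates -- the same job as the Unix pipeline described
    above. Treating apostrophes as word characters keeps contractions."""
    words = re.findall(r"[a-z']+", text.lower())
    return sorted(set(words))

sample = "To be, or not to be: that is the question."
print(word_list(sample))
# -> ['be', 'is', 'not', 'or', 'question', 'that', 'the', 'to']
```

The output for each work is exactly the sorted, de-duplicated word list the analysis calls for.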
The word lists can then be matched against one another by another program to produce the percentage-overlap value. Of course, there will be words like ACT and SCENE that are found in each of the plays. And there are words, like character names, that will be unique to each of them. It would be possible to filter the lists for such things, to maximize accuracy. Or it could simply be assumed that an overlap of 2% or less is insignificant, while anything more than a 98% overlap indicates an identical vocabulary.
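The percentage-overlap calculation itself is a one-liner on sets. This sketch (with invented word sets) also shows why the row work matters: it supplies the denominator, so the measure is asymmetric:

```python
def overlap_percent(words_a: set[str], words_b: set[str]) -> float:
    """Percentage of work A's vocabulary that also appears in work B.
    Note the asymmetry: A's vocabulary size is the denominator, which is
    why the percentages should be read across rows, not down columns."""
    if not words_a:
        return 0.0
    return 100.0 * len(words_a & words_b) / len(words_a)

a = {"love", "death", "crown", "falcon"}
b = {"love", "death", "sea", "storm", "crown", "ship", "sail", "wind"}
print(overlap_percent(a, b))  # -> 75.0  (3 of A's 4 words appear in B)
print(overlap_percent(b, a))  # -> 37.5  (3 of B's 8 words appear in A)
```

A small work can share most of its vocabulary with a large one while the reverse percentage looks tiny--the very distortion discussed below for the Elegy and A Lover's Complaint.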
In a perfect world, it would be easy to distinguish works that use a common vocabulary from other works that use a different vocabulary. In that scenario, the results would look something like this, after sorting the rows and columns:
        A      B      C      D      E      F
A      ~~~   2000   2000     20     20     20
B     2000    ~~~   2000     20     20     20
C     2000   2000    ~~~     20     20     20
D       20     20     20    ~~~   2000   2000
E       20     20     20   2000    ~~~   2000
F       20     20     20   2000   2000    ~~~
In that halcyon scenario, it would be abundantly clear that works A, B, and C were authored by one individual, while D, E, and F were authored by someone else. Things are never that tidy in real life, of course. The actual table of results and an attempted grouping is shown on this page: Vocabulary Used in Shakespeare's Works.
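Given a filled-in matrix, the grouping step amounts to finding connected components: link two works whenever their shared-word count clears a threshold. A toy sketch using the idealized numbers (works A through F, counts invented):

```python
def group_works(matrix: dict[tuple[str, str], int], threshold: int) -> list[set[str]]:
    """Cluster works into connected components: two works land in the
    same group when their shared-word count exceeds the threshold,
    directly or through a chain of intermediate works."""
    works = sorted({w for pair in matrix for w in pair})
    groups: list[set[str]] = []
    for work in works:
        merged = {work}
        rest = []
        for g in groups:
            linked = any(matrix.get((work, o), matrix.get((o, work), 0)) > threshold
                         for o in g)
            if linked:
                merged |= g      # absorb every group this work links to
            else:
                rest.append(g)
        groups = rest + [merged]
    return groups

matrix = {("A", "B"): 2000, ("A", "C"): 2000, ("B", "C"): 2000,
          ("A", "D"): 20, ("D", "E"): 2000, ("D", "F"): 2000, ("E", "F"): 2000}
print(group_works(matrix, threshold=100))  # two groups: {A,B,C} and {D,E,F}
```

On real data, the interesting output is simply the number of groups that emerge--two, three, or a dozen.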
Looking at the results in that page, it is not the case that any particular groupings stand out. Somewhere between 35% and 50% of the words used in each of the works are found in other works.
(The percentages should be inspected by reading across the row, as that is the number that was used in the calculation. Reading down a column can lead to inaccurate conclusions for works like the Elegy and A Lover's Complaint, since they contain a smaller volume of words to start--on the order of 1,000 or 1,400 words, vs. 3,000 or 4,000 for other works. Reading across the row for those works, the percentages are all in the 35-50% range (say, 700 words). But reading down the column, that count comes to only 15% of one of the larger plays--a percentage that looks worse than it is.)
But while the hoped-for grouping did not materialize, some conclusions can be drawn. For one thing, some of the works display a suspiciously large vocabulary (nearly 5,000 words), while others are at the lower end of 2,000 or so, even for a lengthy play. It is probable, then, that the works at the higher end of the scale had contributions from multiple authors, while works at the lower end of the scale could easily have been written by a single author. The works displaying a larger vocabulary would therefore make the best candidates for a deeper analysis: act by act, or scene by scene.
Since simple vocabulary matching didn't produce conclusive results, the next step will be to see which words were shared. But to do that intelligently, it will be helpful to have word lists for each of the candidates. (That step in the analysis will have to wait, alas.)
Another improvement would be to eliminate the small "common" words that are used by everyone--a, an, and, as, but, by, for, if, of, or--and other such words. A comprehensive list of such words could be created by finding the words that are in all of the works (possibly after eliminating the two smallest works, of around 1,000 words each). That effort could remove words that slightly overstate the case for a "vocabulary overlap" between two works.
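Finding those "common" words mechanically is a set intersection across all the works, after dropping the smallest ones as suggested. A sketch with invented mini-vocabularies:

```python
def common_words(vocabularies: list[set[str]]) -> set[str]:
    """Words found in every work -- candidates for a stop-word list.
    The two smallest vocabularies are dropped first, so a thin work
    doesn't knock genuinely common words off the list."""
    vocabularies = sorted(vocabularies, key=len)[2:]
    common = set(vocabularies[0])
    for vocab in vocabularies[1:]:
        common &= vocab
    return common

vocabs = [
    {"a", "the"},                    # tiny work, dropped
    {"a", "and", "sword"},           # tiny work, dropped
    {"a", "and", "the", "love"},
    {"a", "and", "the", "death"},
    {"a", "and", "the", "falcon"},
]
print(common_words(vocabs))  # -> the words a, and, the
```

Subtracting that set from each work's vocabulary before computing overlaps removes the shared-by-everyone noise.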
At the very least, one result of the analysis is a verifiable accounting of the vocabulary displayed in the works. (I, for one, am glad to finally have an accurate count that can be verified by others.) Those words are contained in the file WordsInTotal.lst, which lists the 28,636 words that were found in the works. The list includes verb conjugations and contractions that tend to overstate the case a bit, but the size of the vocabulary is substantial. Their use could be considered a matter of "style"--or they could be an impediment to vocabulary matching. (The jury is still out.)
For those who would like to redo the analysis for themselves, and possibly tweak it a bit, this file contains the works in plain-text form, along with the scripts I ran on them, and test files: ShakespeareVocabularyAnalysis.zip.
The next step in the vocabulary analysis would be an attempt to correlate each authorship group with a given candidate. That process begins by amassing the collected writings of each candidate and boiling them down to word lists, using the same script that was employed to analyze the works themselves.
Even before making such a positive attribution, however, it may be possible to see how well a variety of authorship claims hold up. For example, using the summary of the literature in Who Wrote Shakespeare? as a guide:
Four Essays, meanwhile, claims these plays for de Vere, in addition to the sonnets: All's Well that Ends Well (p. 85), A Midsummer Night's Dream (p. 86), Twelfth Night (p. 94), Hamlet (pp. 94, 100-102), and King Lear (p. 103). Ogburn claims Troilus and Cressida for de Vere (p. 75). And of course, The Real Shakespeare shows how Hamlet encodes de Vere's name using every word that starts with "ver" in the French dictionary.
So the very first test is to see if there are vocabulary groups that match any of the divisions indicated by those proposals.
It should be possible to do an initial test by assessing the vocabulary contained in the sonnets against each candidate's vocabulary. If there is a strong match for de Vere, for example, or for the Earl of Rutland, as some have claimed--and nothing like so strong a match for any other candidate--then we can assume we have found the author of the sonnets, and that the technique works. The same technique can be applied to the works for which Stanley's authorship is argued so persuasively, and to those attributed to Bacon.
At a minimum, it should be possible to rule out at least some of the candidates for some of the works, so that future efforts can be focused on candidates for whom the authorship proposition is viable. (And to the degree that other candidates can't be ruled out by such analysis, to that degree their claim is strengthened.)
A good list of initial candidates could come from Seven Shakespeares, in which Gilbert Slater argues for authorship by Francis Bacon, Edward de Vere, 17th Earl of Oxford, Sir Walter Raleigh, William Stanley, 6th Earl of Derby, Christopher Marlowe, Mary Sidney, Countess of Pembroke, and Roger Manners, 5th Earl of Rutland. That is certainly a good place to start, although eventually all 60+ candidates must be put to the test, if only to rule them out.
Now then, some people claim the epic poem Venus and Adonis for de Vere. Others claim it for Bacon. The first step in the vocabulary analysis should help to determine whether such contested works are likely to have been authored by the same person--or whether that proposition is highly unlikely, instead. As long as some group of works has a common vocabulary that aligns solidly with that of a given candidate--and only that candidate--then it's reasonable to assume that he or she wrote that particular collection of works.
If there are two or more candidates for a given work, meanwhile, then other types of analysis can come into play--a skills accounting, for example, or geographic analysis. It should be possible to attribute at least some of the works using this method. Then, as the number of "unclaimed works" dwindles over time, those that remain can be placed under an even more high-powered investigative microscope to determine the actual author(s).
The attempt to create a group for a given set of works is only the first step--an attempt to pick off the "low-hanging fruit". For example, the epic poem Venus and Adonis has been claimed for Bacon. The sonnets, meanwhile, have been claimed for de Vere. Hamlet has been claimed for him, as well. Some of the plays, meanwhile, have been claimed for Stanley. And proponents of Rutland have laid a claim to everything.
The first question, then, is whether the vocabulary analysis tends to support some kind of grouping in the first place, or whether it argues against it. The next question, given any kind of useful groups, is whether a given group is reasonably attributed to one or more of the suggested candidates--and which candidates can be summarily excluded for a given work.
It is highly likely that only some of the possible attributions will "fall out" from this first level of analysis. For a deeper analysis, it may be necessary to divide the remaining works into sections--act by act, for example, or even scene by scene. It is entirely possible that some of the more politically motivated plays were a composite, created by assigning different parts to multiple authors in order to produce them rapidly enough to have the desired emotional impact on the populace.
To the degree that some of the works can be attributed in their entirety, the analysis of individual acts becomes easier. And, in turn, to the degree that individual acts can be attributed, the analysis of the remaining scenes becomes easier. (If anything remains after that analysis, it would even be possible to go at them line by line!)
For any given analysis of someone's writing "style", there may be many who argue against it, or who come to different conclusions. But the evidence of skills is much harder to refute--especially when a skill draws on a technical vocabulary that would be largely unknown to anyone who had not spent time in that area of endeavor.
Many backers of a given candidate claim that such and such a work shows evidence of medical training, for example, or seamanship. In fact, that entire body of work shows evidence of so many skills that no one author could possibly have acquired them all. A complete list is given in Who Wrote Shakespeare? (p. 18). It includes law, court etiquette, hunting, falconry, classical and esoteric philosophy, statecraft, the Bible, English and European history, classical literature, plus Greek, Latin, French, Italian, and Spanish. It also includes knowledge of Wales, Italy, and the French court at Navarre, Danish terms and customs, horticulture, garden design, music, painting, sculpture, mathematics, astronomy, astrology, natural history, fishing, medicine, psychology, the military, heraldry, exploration of the New World, celestial navigation, seamanship, printing, folklore, theatre management, Cambridge University jargon, Freemasonry, cryptography, and spy craft. (Whew!)
Whatever words you use, you are hardly likely to spend a great deal of time talking about falcons if you are not yourself a falconer. It is even less likely that you will draw an analogy to falconry when discussing something else entirely. (For example, "getting kids to behave is harder than training a falcon"--only someone with experience would have any idea how hard it actually is, so only someone with that experience would be likely to say such a thing.)
There will be exceptions, though. For example, there might have been a common saying at the time that went something like, "As fast as a falcon on the hunt". That's a phrase you might use if you had ever seen a falcon diving, or even if you had heard it frequently enough. On the other hand, if you start using a technical term for a falcon's hood, the odds go up greatly that you know something about the sport. Do it several times, and the inference is a near-certainty. In Romeo and Juliet, the author does just that (Shakespeare's Unorthodox Biography, p. xiv).
So identifying such skills won't be as simple as getting a computer and turning it loose. Computers can identify candidate passages, of course. Informed people will then need to make a judgement about each passage. Of most significance will be the number of passages that remain. If there are over a hundred, for example, we might call that a certainty. If 30 or 40, highly probable. If fewer than 5, certainly possible. If 2 or fewer, "maybe".
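The count-to-verdict mapping above can be captured directly. The in-between ranges are not spelled out, so the cutoffs used here are one plausible reading, not a calibrated scale:

```python
def skill_verdict(passage_count: int) -> str:
    """Map the number of vetted passages displaying a skill to a
    qualitative label. Thresholds follow the rough cutoffs in the text;
    the gaps between them are filled in one plausible way."""
    if passage_count > 100:
        return "certainty"
    if passage_count >= 30:
        return "highly probable"
    if passage_count >= 5:
        return "probable"
    if passage_count >= 3:
        return "certainly possible"
    return "maybe"

print(skill_verdict(150))  # -> certainty
print(skill_verdict(35))   # -> highly probable
print(skill_verdict(4))    # -> certainly possible
print(skill_verdict(2))    # -> maybe
```

The human judgement stays where it belongs--in vetting the passages; the mapping only standardizes how the counts are reported.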
However, it should be possible to create a matrix of skills that belong to particular candidates, like the one shown here. (Open cells need to be filled in with yes, no, or some qualification. For example: "Knowledge of music includes composition and piano, but not singing or guitar.")
Candidate   Archery   Falconry   Hunting   Military   Music   Poetry   Playwriting   Science   Seamanship   ...
Bacon       no        no         no        no                          no            YES1
de Vere     YES       YES                  ???        YES     YES2     ???                      ???
Marlowe                                                                YES
Stanley                                                                YES3
...
Note the question marks in some of the columns. We know that de Vere was a poet, because he was acclaimed as such by his peers. And we know he wrote at least some plays in his younger years, and performed them for the Queen. Does that make him a "playwright"? Possibly. Probably, in fact. But I've left it as a question until I know more.
For other things, like military knowledge and seamanship (as opposed to navigation), we know he accompanied military leaders on a couple of campaigns, for a short period of time, and that he traveled by sea on a few occasions. The question is whether that level of exposure rises to the level of knowledge displayed in the works. (It could even be that some works display an amateur level of knowledge he would certainly have, while others display an in-depth mastery that he did not have. The same could be true for other skills, like medicine. So perhaps such potentially sophisticated skills need two columns each.)
The use of Cambridge terminology is a critical "skill" for candidate-exclusion. If the number of words is large, then the technique of vocabulary analysis can be used with great effect. But if, as expected, the number of words is small, then the existence of such words in a work argues against any candidate who did not attend Cambridge (for that work, at least). But again, it's a numbers game. One or two words could conceivably have been picked up in conversation. But 10 or 20 of them suggests that the author spent time there.
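That numbers game can be sketched directly. The jargon terms below ("sizar", "tripos", "wrangler") are real Cambridge words, but the list is a stand-in--compiling a defensible one is the actual work:

```python
def cambridge_jargon_verdict(work_words: set[str], jargon: set[str]) -> tuple[int, str]:
    """Count Cambridge-specific terms in a work's vocabulary and apply
    the numbers game described above: one or two hits could be
    conversational pick-up; ten or more suggest time spent there."""
    hits = len(work_words & jargon)
    if hits >= 10:
        return hits, "author likely spent time at Cambridge"
    if hits <= 2:
        return hits, "could have been picked up in conversation"
    return hits, "inconclusive"

jargon = {"sizar", "tripos", "wrangler"}          # illustrative term list
work = {"sizar", "love", "death", "crown"}
print(cambridge_jargon_verdict(work, jargon))
# -> (1, 'could have been picked up in conversation')
```

The same counting logic applies to any skill that leaves a distinctive technical vocabulary behind.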
With the table filled in, the skill sets can then be compared to the skills exhibited in the various works, as in the table below. For example, some works have been shown to display knowledge of cutting-edge science that had not even been published at the time. Bacon is certainly a candidate for those works. Others could have heard of such findings from Bacon, of course, but again the degree to which the knowledge is displayed, and the depth of that knowledge, comes into play in what is, admittedly, a probabilistic argument.
                   Archery   Falconry                                          ...
Romeo and Juliet             Yes (Shakespeare's Unorthodox Biography, p. xiv)
...
Of course, many candidates will have had experience with falconry, for example. So the primary use for any one skill is exclusionary--the ability to say with a fair degree of certainty that candidate X did not have that skill. The combination of multiple skills will tend to exclude so many people that only one reasonable candidate remains. (If no one remains, then the work becomes a prime candidate for multiple authors--which in turn suggests a deeper analysis, to be discussed shortly.)
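The exclusionary use of the matrix is a subset test: a candidate survives only if the work's displayed skills all fall within their skill set. The candidates and skills below are invented for illustration, not researched attributions:

```python
def surviving_candidates(work_skills: set[str],
                         candidate_skills: dict[str, set[str]]) -> list[str]:
    """Keep only candidates who possess every skill the work displays.
    A single missing skill is exclusionary, as argued above."""
    return sorted(name for name, skills in candidate_skills.items()
                  if work_skills <= skills)

candidates = {
    "Bacon":   {"law", "science", "music"},
    "de Vere": {"law", "falconry", "music", "seamanship"},
    "Marlowe": {"classics", "espionage"},
}
work = {"falconry", "music"}
print(surviving_candidates(work, candidates))  # -> ['de Vere']
```

An empty result is itself informative: no single candidate covers the skills, so multiple authorship becomes the working hypothesis.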
The interesting aspect of that analysis is that while no one can become proficient in everything, everyone becomes proficient in some things--and the combination of skills that one possesses tends to be as unique as a fingerprint. For example, how many essay-writing technical authors are there who have never written a play or short story, but who have a great deal of experience with Irish dance, musical instruments, nutrition, and fitness pursuits--especially if you add in familiarity with Hermetics, Yoga, tantric practices, and the Shakespeare authorship problem? Not many, I'll wager! (I happen to know of one--me. It's possible that there are others. But as the list of skills grows--both those clearly in evidence and those not in evidence--the number of viable candidates reduces toward one.)
Geographic analysis takes into account familiarity with a given place, at a given time. Again, human inspection is needed. Most anyone could say that the Tower in Pisa leans. But to say that the shadow it casts at noon touches a particular fountain, for example, pretty much requires you to have been there. And if you happened to visit the city before the tower was built, well, your visit wouldn't really count.
Similarly, at one time Bohemia included a district that had access to the sea. But later on, it didn't. One of the works mentions the Bohemian Sea--which implies that someone who visited earlier is likely to have written it. (On the other hand, the author could also have heard stories from someone who went there at that time, so social network analysis could play a role here, in conjunction with an evaluation of the number and type of references.)
Again, one such statement could be something that one heard in conversation. But many such references implies that the writer was there themselves or, at the very least, they were in effect recording an interview with someone who had been.
Geographic analysis is not limited to geographic features. For example, Hamlet and the ghost that haunts him are found on "the platform before the castle". In fact, the castle at Elsinore has just such a feature, known locally as "the ghost walk". But since no other castles at the time had that feature, we have to assume that the author of Hamlet spent time there (Who Wrote Shakespeare?, p. 221). At the very least, we have to assume that someone who did made a sizable contribution to that work.
Such claims can be applied to editing attributions, as well. For example, the stage direction to have Hamlet there does not appear in the first quarto edition but does appear in the second, a year later. Similarly, in the first quarto edition, the castle is said to have a mountain in view. In the second, that line is revised to say (more accurately) that it is a hill (Who Wrote Shakespeare?, p. 221). Those revisions suggest that the first quarto was written by someone who heard about the feature, after which it was reviewed by someone who had been there, leading to that revision, among others, to correct the inaccuracy. (Roger Manners, the Earl of Rutland, is known to have spent a considerable amount of time in Denmark--something that cannot be said of de Vere or Stanley.)
At least 14 groups of authors have been proposed as authorship candidates (Who Wrote Shakespeare?, pp. 242-243). But again, those theories tend to examine the body of works as a whole, and suggest that it results from some kind of group collaboration. Those theories tend to be susceptible to the question, "With so many people in on the secret--and with some of them at odds with one another--how is it that authorship was kept secret?"
I suggest that the answer lies at least partially in multiple sequential authorship. Under that theory, the works may have been written by different people, but each work is written by someone who may not even know the others. If I happen to know that a nom de plume is available, and choose to use it, I can do so without knowing for sure who originally used that alias. In addition, were I to do so for good cause--such as to avoid the tortures of the Elizabethan Star Chamber--I would be highly unlikely to raise any questions at all, for fear of causing people to dig a pit that I might very well fall into myself.
Of course, knowing that a nom de plume is available and being able to use it requires a good "connection" of some kind. So social network analysis (which maps a person's connections to, and affinities for, others) can be used to establish that there was a close enough relationship to make it possible. A good example of that kind of analysis was done by Professor Donald Hayes (Shakespeare Beyond Doubt?, pp. 237-248).
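At its simplest, that kind of analysis asks whether two people were connected within a few hops of acquaintance. A toy breadth-first search over an invented network:

```python
from collections import deque

def are_connected(network: dict[str, set[str]], a: str, b: str,
                  max_hops: int = 2) -> bool:
    """Breadth-first check for a connection within a few hops -- a toy
    version of social network analysis. Real work would also weight the
    edges by the strength and kind of each relationship."""
    frontier, seen = deque([(a, 0)]), {a}
    while frontier:
        person, hops = frontier.popleft()
        if person == b:
            return True
        if hops < max_hops:
            for friend in network.get(person, ()):
                if friend not in seen:
                    seen.add(friend)
                    frontier.append((friend, hops + 1))
    return False

network = {"A": {"B"}, "B": {"A", "C"}, "C": {"B"}, "D": set()}
print(are_connected(network, "A", "C"))  # True: A-B-C within two hops
print(are_connected(network, "A", "D"))  # False: no path at all
```

The interesting authorship question is then whether each proposed user of the pen name sits within reach of its previous user.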
Given that individual authorship can be assigned to at least some of the works, those that remain should (if I'm correct) all have some sort of propaganda value--which would tend to imply that they needed to be created rapidly, so parts were assigned to multiple authors.
In that case, social network analysis can provide additional evidence (or counter-evidence) for likely candidates. If vocabulary, geographic, and skills analysis suggests that there are several candidates who could have contributed parts to a particular work, then social connections could provide a good rationale for preferring a particular choice.
There are several circles to examine: the one at Gray's Inn is a strong possibility. But there is also one surrounding the Countess of Pembroke, Bacon's group, and the one around Sir Francis Drake. (As for the ability to keep such things secret, the period is noted for "secret societies"--although the one we know most about is the one surrounding Drake, mostly because they failed to keep their secret!) For a small, close-knit group--especially one with the power of the state behind it--maintaining secrecy would not be terribly difficult.
For those works that do require an argument for collaborative authorship, the question remains, "How was such a thing kept secret, even after it was no longer important?" One answer is that if it were an important secret, you would not tell others--not even close relatives, in an age of manifold spies and informants. And you would certainly never write anything down. So succeeding generations would never know. Another answer is suggested in Four Essays, where it is proposed that de Vere's involvement would cause questions of succession to be raised, given that de Vere was of royal blood. The short final essay contains a summary (pp. 115-118), but much of the evidence is amassed and put on display in the third essay (pp. 73-113).
Stanford University makes available a good ontology tool (a tool for recording relationships between concepts), called Protégé. That tool makes it possible to create a graph that can be inspected online. For example, each of the works can be linked to a list of words it contains, and to a list of skills it exhibits. Similar links can be created for each of the candidate authors.
A proposition like "Bacon knew about science" can then be linked to a set of supporting claims like "Bacon knew about physics", or "Bacon knew about (some scientific theory that wasn't published until much later)".
The really fascinating part about such an ontology, however, is the ability to distinguish claims from evidence. In that ontology, a claim can be a statement I've made, or it can be a citation of a claim that someone else has made. Here, for example, is a claim I've made in this article: "Shaksper's handwriting samples consist of six recorded signatures, all of which were spelled differently, except for two that were the same". In an attempt to "prove" the assertion, I've cited an online article (Shakespeare Authorship Question, Education and Literacy).
But it is important to note that the citation merely points to a claim made by someone else. It is only when we examine that claim that we can find the evidence--the reproduction of the six signatures that allows an independent examiner to evaluate them and say that yes, the spellings are different and, moreover, they were written by a different hand in each case!
It's an important distinction to bear in mind when evaluating an argument--because a circle of claims all pointing to one another doesn't prove a thing. It's only when at least one of the claims reduces to actual evidence that the argument has a foundation. As we saw in the Wikipedia page devoted to the authorship question, claims can be falsely cited as "proof" (evidence), when in fact they are merely references to other claims.
In this article, I've tried to keep the distinction clear. But it's possible for anyone to slip up. With an ontology, however, every claim and every bit of real evidence is labeled as such. Using such a tool, it would be possible to rapidly drill down to actual evidence--and even to show that all of the people who have made claims about those signatures are basing their conclusions on the same six pieces of evidence. That capability makes it possible to drop the citations out of the picture, leaving only one's assertions and the evidence for them.
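The drill-down such an ontology enables can be modeled as a reachability question: a claim is grounded only if some chain of citations bottoms out in actual evidence. A toy sketch (the claims and citations here are invented):

```python
def is_grounded(node: str, cites: dict[str, list[str]], evidence: set[str],
                seen: frozenset[str] = frozenset()) -> bool:
    """A claim is grounded if it is evidence, or if any chain of
    citations from it eventually reaches evidence. A circle of claims
    pointing at one another grounds nothing."""
    if node in evidence:
        return True
    if node in seen:  # looped back onto ourselves: circular citation
        return False
    return any(is_grounded(c, cites, evidence, seen | {node})
               for c in cites.get(node, []))

evidence = {"six signatures (reproduced)"}
cites = {
    "claim: spellings differ": ["claim: article says so"],
    "claim: article says so":  ["six signatures (reproduced)"],
    "claim: circular A":       ["claim: circular B"],
    "claim: circular B":       ["claim: circular A"],
}
print(is_grounded("claim: spellings differ", cites, evidence))  # True
print(is_grounded("claim: circular A", cites, evidence))        # False
```

Run over a real ontology, the same traversal would also reveal how many independent claims ultimately rest on the same few pieces of evidence.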
An ontology of that kind could be made available online, where it would be open to inspection. And, as in a wiki, multiple scholars could collaborate on it. (Meanwhile, those of us who fail to qualify as acknowledged "scholars" could make suggestions and contribute to the discussion.)
Of course, that kind of open ontology would be anathema to the Stratfordians. The "evidence" cited by Stratfordians is almost invariably a claim made by someone else. Take the claim that "Shaksper must have learned about falconry" (assuming that Shaksper wrote the work, and that it shows evidence of that skill): the only way to "prove" it is by citing someone else who made the same assumption--because there is absolutely no evidence that can be produced to back any such claim.
An ontology would all too clearly expose the abject weakness of their case. Distinguishing claims made by others from actual documents obtained from the period would show the total lack of solid, corroborating evidence for the Stratfordian candidate--especially when contrasted with the evidence that is available for known authors and actors of the period.
Fuzzy Logic is a branch of mathematics that turns quantities into qualitative values--generally, by dividing the quantities into overlapping ranges, and giving each range a name.
Computers, of course, excel at quantitative calculations. But people work better with qualitative values. So it's a great way to bridge the gap. (In addition, people can write and read control programs that use fuzzy terms much more easily than they can write programs based on numerical analysis.)
That kind of logic was used to program Japan's high-speed trains. The control programs say things like, "If going slow, speed up slowly. If going faster, speed up more rapidly. If going fast,..." The terms "slow", "fast", and "faster" each correspond to a numerical range. The ranges overlap (they're "fuzzy", remember) so at any given speed, multiple ranges apply. The computer then assigns a weight to each of the ranges.
For example, a given speed might be at the very high end of "slow", and at the medium-low end of "faster". The logic would then apply a weighting (say, 20% and 80%), and then examine the rules that say how much acceleration to apply. Each of the acceleration values is a range, too. Weighting the calculations produces an exact value based on the acceleration ranges. The computer is good with exact values, so it uses that value to accelerate the train.
Doing those calculations many times a second, it comes up with a series of precise values that gradually change over time. So the train starts extremely slowly as it comes out of the station, and gradually accelerates as it moves along. It then reverses the process when coming into the station. The train operates so smoothly, as a result, it's said you never have to hold a strap!
Logic of that kind might well be useful in assigning authorship. We should be able to define ranges for geographic analysis, skills accounting, and vocabulary analysis. Combining those results should give us a respectable--and readable--evaluation of the probability of authorship for a given candidate. So instead of looking at a table and reading "32", for example, it should be possible to read something like "reasonably probable". When that rating is compared to someone else's "highly probable" or "highly improbable", the inference is fairly clear.
The use of fuzzy terms could make it easier to assign authorship--because in Fuzzy Logic terms, a candidate does not need to have a definitive claim to a single work. Instead, there can be a degree of membership in the author-candidate pool for that work. Similarly, each work can support various claims of authorship, to varying degrees. For example, one work might display strong evidence of a knowledge of falconry, while another displays weaker evidence.
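As an illustration of degree-of-membership, here is a small Python sketch built on the falconry example. All the works, candidates, and evidence scores are hypothetical, and taking the minimum of two degrees is just one common fuzzy-AND convention, not a claim about the right way to combine evidence.

```python
# Evidence that each work displays knowledge of falconry, as a degree 0..1.
# (All names and scores here are invented for the example.)
falconry_in_work = {"Work A": 0.9, "Work B": 0.3}

# Each candidate's documented familiarity with falconry, as a degree 0..1.
falconry_of_candidate = {"Candidate X": 0.8, "Candidate Y": 0.0}

def membership(work, candidate):
    """Degree to which the falconry evidence supports a claim of authorship.
    Uses the minimum of the two degrees--a common fuzzy-AND."""
    return min(falconry_in_work[work], falconry_of_candidate[candidate])

print(membership("Work A", "Candidate X"))  # prints 0.8
print(membership("Work A", "Candidate Y"))  # prints 0.0
```

A candidate with no documented falconry thus has zero degree of membership for a falconry-heavy work, however strong the work's evidence is--which is exactly the intuition one wants captured.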
Fuzzy Logic might (I say again, might) then provide a mechanism for organizing and combining those claims, in a way that makes it easier to identify the most likely candidates. (On the other hand, it may prove to be yet another of the wonderfully intriguing red herrings that bedevil the investigations into the authorship question! But given that it is a fun branch of mathematics that makes it possible to reason intelligently in the absence of absolute certainty, I thought I would mention it.)
In this day and age, the tools used for analysis should be readily available, and the data should be easily shared. As mentioned, an ontology tool like Protégé would be useful for recording citations and evidence in a way that can be rapidly edited by collaborators, and checked by others. Changing a few assumptions could easily change the conclusions. Experts can then argue over their assumptions, for a change, instead of their conclusions.
The goal, in the end, is to provide a reasonable assessment of the evidence in the areas of vocabulary, skills, and geographic references. Social network analysis would also come into play for any works for which the measurements suggest collaboration.
The assessment in each area can be stated as a percentage, which equates to a probability--a value between 0 and 1.0--where 100% means the evidence is conclusive, and 0% means there is no evidence at all.
Each work would produce a list in each area: a list of words used, a list of skills represented, and a list of geographic references. Each candidate, in turn, would be evaluated against those lists, yielding a percentage match in each area.
The resulting percentages can then be averaged to produce a probability of authorship for that candidate. (They could also be multiplied, but averaging makes more sense. If a candidate had a 90% rating in all three areas, for example, the average would be 90%. But multiplying the values, as one generally does for probabilities, would produce a value of only 73%. Either method would produce comparable relative values for the candidates, but averaging produces a value that matches the conclusion one would intuitively draw from such high percentages.)
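The arithmetic in that comparison is easy to verify--this snippet simply reproduces the 90%-in-three-areas example from the paragraph above:

```python
# Three hypothetical area ratings: vocabulary, skills, geography.
ratings = [0.90, 0.90, 0.90]

average = sum(ratings) / len(ratings)            # 0.9
product = ratings[0] * ratings[1] * ratings[2]   # 0.9 ** 3 = 0.729

print(f"average: {average:.0%}")   # prints "average: 90%"
print(f"product: {product:.0%}")   # prints "product: 73%"
```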
Once a table has been created for each work, and one for each candidate, then a summary table can be created that evaluates each of the candidate authors for that work. Such a table would look something like this:
As You Like It       Vocabulary   Skills   Geographics   Average
Edward de Vere          nn%        nn%        nn%          nn%
William Stanley
Francis Bacon
...
William Shaksper
Fuzzy Logic might play a role in the resulting analysis--or it could be that simple, non-overlapping ranges would be more useful. A value of 98% or more might be listed as "certain", 95% or more as "near certainty", 80-95% as "highly probable", and so on. For ease of reading, then, the numerical values could be replaced by readable phrases.
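A non-overlapping mapping of that kind might look like the following sketch. The 98% and 95% thresholds come from the text above; the lower bands are invented here just to round out the scale.

```python
def label(pct):
    """Translate a numeric rating (0-100) into a readable phrase,
    using simple, non-overlapping bands."""
    if pct >= 98:
        return "certain"
    if pct >= 95:
        return "near certainty"
    if pct >= 80:
        return "highly probable"
    if pct >= 50:
        return "reasonably probable"
    if pct >= 20:
        return "improbable"
    return "highly improbable"

print(label(97))  # prints "near certainty"
print(label(85))  # prints "highly probable"
```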
Analysis of that kind should prove decisive for assigning the works--especially when the analysis is repeated act by act, and scene by scene, for any works in which attribution is not clear.
Of course, that kind of methodology is the last thing that the Stratfordians want to see. After all, there is evidence that de Vere engaged in falconry. For Shaksper, there is none. We have examples of Stanley's writing. For Shaksper, there is nothing. And we have records of other candidates' travels. For Shaksper, nothing. So it is possible to formulate reasonable conclusions for every candidate except the Stratford candidate, all backed by solid evidence.
Such an analysis would make all too visible the fact that, for Shaksper, the values are 0%, 0%, and 0% across the board. And Shaksper is the only proposed candidate to draw a total blank in every column. So, were the results of such a study ever to be published, the case for Stratford would be unequivocally demonstrated to be that of an emperor without clothes--a totally nude, totally ridiculous specimen of humanity.
Of course, all the analysis in the world won't be the same as a "smoking gun" that points an undeniable finger at a single author--but if in fact there is no single author, any attempt to find such a smoking gun is doomed at the outset. Once the canon is divided into similar-vocabulary groups, though, it should be possible to provide good arguments--backed by evidence--for the authorship of each group.
Analysis of that kind won't be totally conclusive, of course. It won't be the same as finding a letter that says, "candidate X wrote work Y". (If there were, it would have been found by now--or even destroyed by the preservers of orthodoxy.) But it should be possible to say, "These three candidates were in such and such a place, or had such and such a skill. But only this candidate was in that place, with this particular skill, so we are 99% sure that he is the author."
At bottom, then, the real question is not, "Who wrote Shakespeare's works?" but rather, "Which works were written by which authors, either singly or in collaboration?" Answering that question will teach us a lot about history, about the workings and political intrigue of Elizabethan society, and about the internal motivations of the various authors.
Copyright © 2014
by Eric Armstrong. All rights reserved.