Thursday, January 7, 2016

In a recent Enoch Seminar Online review, Atar Livneh calls attention to an important book on Rewritten Scripture that deserves notice from those working on the topic:
Ariel Feldman and Liora Goldman, Scripture and Interpretation: Qumran Texts that Rework the Bible. Edited and introduced by Devorah Dimant. Beihefte zur Zeitschrift für die alttestamentliche Wissenschaft 449. Berlin: de Gruyter, 2014.
The book contains both editions of and commentaries on selected Rewritten Scripture texts from Qumran that promise to be of great interest. And of course, in light of the recent Review of Biblical Literature decision to close off public access to its reviews, it is always nice to find helpful reviews conveniently published online!
Tuesday, December 29, 2015
Coherence-Based Genealogical Method
For those interested in keeping up with methodological discussions in New Testament textual criticism, TC: A Journal of Biblical Textual Criticism has released the results of a panel discussion on the Coherence-Based Genealogical Method that might be of interest. I have personally benefitted much over the years from methodological discussions with NT colleagues and am a strong proponent of interdisciplinary dialogue. The supposed gulf between OT and NT textual criticism is often grossly exaggerated to justify isolationism and methodological weaknesses, and the two fields are essentially more similar than different, in my opinion. The CBGM is the primary attempt to handle the mass of textual evidence statistically and undergirds ongoing revisions to the Nestle-Aland text of the NT via the Editio Critica Maior. It is quite complicated and remains controversial, but it is well worth trying to get at least a big-picture view of the methodological discussion.
For those who don't have time to work through the details or need help keeping the big picture in mind, I will briefly summarize/simplify the steps that go into this process at present.
1. Transcribe all extant Greek witnesses into an electronic format (XML). When I worked for the IGNTP project on John, two transcribers transcribed the text of each manuscript independently by changing a base text (textus receptus) to match the manuscript being transcribed. They included information about text, layout, lacunae, and corrections in a Word document according to a set pattern. A senior scholar then ran an automated comparison of the two transcriptions and personally reconciled any conflicts, yielding extremely accurate final transcriptions. A technical officer then ran a script to convert the transcriptions into XML format. I think they are now starting to use an online transcription editor that creates the XML directly. In my opinion, NT scholars do a much better job than OT scholars at exhausting the direct manuscript tradition and storing the raw data in electronic format, though they probably have much to learn from OT scholars on the use of versions, which unfortunately feature minimally in the CBGM.
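To make the double-transcription workflow concrete, here is a rough sketch of the reconciliation step, with invented data and a made-up function name; the actual IGNTP tooling works on full transcriptions with layout, lacunae, and correction data, stored as XML.

```python
from itertools import zip_longest

# Sketch only: the real IGNTP workflow compares full transcriptions (text,
# layout, lacunae, corrections), not bare word lists, and stores them as XML.
def reconcile(transcription_a, transcription_b):
    """Compare two independent transcriptions token by token and return the
    agreed tokens plus a list of conflicts for a reconciler to resolve."""
    agreed, conflicts = [], []
    for i, (a, b) in enumerate(zip_longest(transcription_a, transcription_b)):
        if a == b:
            agreed.append(a)
        else:
            conflicts.append((i, a, b))  # flagged for manual reconciliation
    return agreed, conflicts

# Two transcribers independently alter the base text to match the manuscript:
first = "εν αρχη ην ο λογος και ο λογος ην προς τον θεον".split()
second = "εν αρχη ην ο λογος και ο λογος ην προς τον θν".split()
agreed, conflicts = reconcile(first, second)
print(conflicts)  # [(11, 'θεον', 'θν')] -> resolved against the manuscript images
```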
2. Automatically collate the electronic texts, and then have a human editor correct the automatic collations in order to isolate appropriate variation units (strings of text where variation occurs) and the variant readings. In this process, the editor "regularizes" the spelling to eliminate minor orthographic differences and to identify meaningful agreement and disagreement among the collated variant readings.
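A toy illustration of how regularization and collation interact (the sigla, word lists, and regularization table are invented, and real collation tools also align additions and omissions, which this sketch ignores):

```python
# Sketch only: regularize spelling, then group witnesses by reading at each
# word position. Only positions with genuine (non-orthographic) disagreement
# survive as variation units.
REGULARIZATIONS = {"θς": "θεος", "θν": "θεον", "κς": "κυριος"}

def regularize(token):
    return REGULARIZATIONS.get(token, token)

def collate(witnesses):
    """witnesses: dict of siglum -> list of tokens (pre-aligned for simplicity).
    Returns variation units: positions where regularized readings still differ."""
    length = max(len(tokens) for tokens in witnesses.values())
    units = []
    for pos in range(length):
        readings = {}
        for siglum, tokens in witnesses.items():
            reading = regularize(tokens[pos]) if pos < len(tokens) else "om."
            readings.setdefault(reading, []).append(siglum)
        if len(readings) > 1:  # genuine variation, not mere orthography
            units.append((pos, readings))
    return units

witnesses = {
    "01": "προς τον θν".split(),
    "03": "προς τον θεον".split(),
    "05": "προς τον κυριον".split(),
}
print(collate(witnesses))  # [(2, {'θεον': ['01', '03'], 'κυριον': ['05']})]
```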
3. Automatically calculate the statistical proximity of each text to each other text based on their percentage agreement in variation units. They call this "pre-genealogical coherence," because it reflects the absolute statistical level of agreement between witnesses without respect to the nature of the agreements or actual genealogical relationships. In other words, variants are counted first, and only later weighed and recounted.
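At its core, step 3 is a pairwise agreement matrix. Here is a minimal sketch with invented readings; the real CBGM software of course works from the full collation database.

```python
from itertools import combinations

# Sketch only: each witness is recorded by its reading (a, b, c ...) at each
# variation unit; None marks a lacuna. All data here are invented.
readings = {
    "01": ["a", "b", "a", None, "c"],
    "03": ["a", "a", "a", "b", "c"],
    "05": ["b", "a", "a", "b", None],
}

def agreement(w1, w2):
    """Percentage agreement over the units where both witnesses are extant."""
    both = [(x, y) for x, y in zip(readings[w1], readings[w2])
            if x is not None and y is not None]
    return 100 * sum(x == y for x, y in both) / len(both)

# "Pre-genealogical coherence": raw closeness, with no claim yet about the
# direction of descent or the quality of the agreements.
for w1, w2 in combinations(readings, 2):
    print(w1, w2, round(agreement(w1, w2), 1))  # e.g. 01 03 75.0
```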
4. Attempt to relate the variant readings in each variation unit to each other by a local stemma (i.e., a stemma of variant readings) that explains the direction of development of particular readings from other readings. At this point, textual critics use the internal and external evidence (including ancient versions) in basically the same way as TC has traditionally been practiced. Pre-genealogical coherence serves as an important criterion in evaluating external evidence at this stage.
5. By evaluating and recording each variation unit on its own (though you can, if you wish, include at this initial stage only the variation units that can be confidently adjudicated), you create a database of all your decisions about the initial text and local stemmata of readings.
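One simple way to picture the database in steps 4 and 5 is a small record per variation unit: its readings, the directed edges saying which reading gave rise to which, and (where a decision was reached) the reading judged initial. The format below is invented for illustration and the sample judgments are not text-critical claims; it is not the format used by the actual CBGM tools.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class LocalStemma:
    """One variation unit's stemma of readings (invented format)."""
    unit: str
    readings: dict                             # label -> text of the reading
    edges: list = field(default_factory=list)  # (source_label, derived_label)
    initial: Optional[str] = None              # reading judged initial, if decided

decisions = {}  # the growing database of editorial decisions, keyed by unit

# The directions recorded below are purely illustrative, not textual claims:
decisions["Jn 1:18"] = LocalStemma(
    unit="Jn 1:18",
    readings={"a": "μονογενης θεος", "b": "ο μονογενης υιος"},
    edges=[("a", "b")],  # judgment recorded: reading b developed from reading a
    initial="a",
)
decisions["Jn 1:34"] = LocalStemma(
    unit="Jn 1:34",
    readings={"a": "ο υιος", "b": "ο εκλεκτος"},
    edges=[],            # left undecided at the first pass
)
```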
6. Since this database records both your proposals for the initial text and the directions of development of readings from other readings, the computer can work out the ramifications of your decisions for the textual flow of the tradition, which they call "genealogical coherence." It can also flag up decisions which create incoherent pictures of textual transmission, which can then be reconsidered in closer detail. For instance, if you said that one reading developed from another, but that decision implies textual relationships that do not fit with your other decisions, you can reconsider the decision in light of a clearer picture of the relationships between textual states. It is important to note that the computer does not make the textual decisions for you, but only keeps track of all your previous decisions simultaneously and points out inconsistencies.
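Very roughly, what the computer then does with that database looks like the following sketch (invented data, and a drastic simplification of the actual software): for each pair of witnesses it tallies how often one attests a reading judged prior to the other's, which, combined with pre-genealogical coherence, yields candidate directions of textual flow and lets incoherent decisions be flagged for review.

```python
from collections import Counter

# Sketch only, with invented data: witnesses' readings at three variation
# units, and the editor's local stemmata ("source reading" -> "derived reading").
witness_readings = {
    "01": {"u1": "a", "u2": "a", "u3": "b"},
    "03": {"u1": "a", "u2": "b", "u3": "a"},
    "05": {"u1": "b", "u2": "b", "u3": "b"},
}
local_stemmata = {"u1": [("a", "b")], "u2": [("a", "b")], "u3": [("a", "b")]}

def prior_posterior(w1, w2):
    """Count the units where w1 attests the source reading and w2 the derived
    reading, and vice versa, according to the recorded local stemmata."""
    counts = Counter()
    for unit, edges in local_stemmata.items():
        r1, r2 = witness_readings[w1].get(unit), witness_readings[w2].get(unit)
        if (r1, r2) in edges:
            counts["w1_prior"] += 1
        elif (r2, r1) in edges:
            counts["w2_prior"] += 1
    return counts

# Here 01's text is prior to 05's at two units and posterior at none, so the
# "textual flow" runs from the state in 01 toward the state in 05. A decision
# implying flow against such overall tendencies would be flagged for review.
print(prior_posterior("01", "05"))  # Counter({'w1_prior': 2})
```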
7. Once problematic variation units are flagged up, you can reexamine them in light of the picture of the general textual flow resulting from your decisions. If you did not come to a conclusion about a particular issue in your initial evaluation, you can now reexamine it with the additional information as well. In other words, you can use the overall consistency or "coherence" of the textual relationships resulting from your text-critical decisions as a criterion for reevaluating and refining your prior conclusions. This is where the complicated parts of the CBGM really come in. I haven't used it enough to give a thorough treatment, but I'll try to highlight a few important points.
- Texts are abstracted from the material contexts in which they are found (a major point of controversy), so a historically later "text" can theoretically be hypothesized as a source for a historically earlier "text." The implication of this is that the CBGM does not produce a stemma of manuscripts intended to reflect the historical relationships between manuscripts, but rather the general flow of the text between known states of the text. In this regard, the CBGM is better suited for reconstructing the initial text and isolating important texts than for tracing the history of the text in real time.
- Search for an optimal number of source texts to explain each known state of the text. On the assumption of a generally conservative transmissional tendency, try to identify which texts would be required to provide sufficient source material to explain the origins of the readings without needlessly multiplying sources. Texts can have multiple ancestors, but the number of ancestors hypothesized should be kept to the minimum required to explain the evidence sufficiently. Potential ancestors that explain many readings in the text are to be preferred to those with only occasional helpful source material. Potential ancestors should be sought in documented states of the text, not reconstructed hyparchetypes. Again, this process is not making a claim that the manuscript itself was copied from these sources, but only that its text somehow inherited text from the textual states now documented in the other manuscripts. (A toy sketch of this kind of search follows this list of points.)
- Local stemmata that fit coherently within the general textual flow are typically to be preferred to those that do not.
- Readings emerging in genealogically distant texts may be explained as contamination/mixture or as having been created independently multiple times in the tradition. In other words, the overall coherence of the tradition can be used to clarify just how "indicative" a variant reading is for genealogical relationships. Some shared secondary readings may simply be coincidental or the result of occasional mixture.
- On the supposition that the impact of contamination/mixture between different texts was typically less extensive than that of normal copying processes, the coherence of the general textual flow can be used to minimize or bypass the occasional effects of contamination and accurately trace the directions of the flow of the text, even in an open (i.e., contaminated) system.
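The search for an optimal set of ancestors described in the second point above can be pictured as a small set-cover problem: among the candidate witnesses, pick the fewest whose readings together account for the readings of the text being explained, preferring candidates that explain more and agree more. The greedy sketch below (invented data and function names) only illustrates the idea; it is not the procedure implemented in the CBGM software.

```python
# Greedy sketch: choose a small set of potential ancestors for witness X by
# repeatedly picking the candidate that explains the most of X's readings not
# yet accounted for, breaking ties by overall agreement with X.

def explains(candidate_readings, x_readings, unit):
    """Here a candidate 'explains' a unit if it shares X's reading there; a
    fuller model would also accept a reading from which X's reading derives."""
    return candidate_readings.get(unit) == x_readings.get(unit)

def optimal_ancestors(x, witnesses, agreement):
    x_readings = witnesses[x]
    unexplained = set(x_readings)
    chosen, candidates = [], [w for w in witnesses if w != x]
    while unexplained and candidates:
        best = max(candidates, key=lambda w: (
            sum(explains(witnesses[w], x_readings, u) for u in unexplained),
            agreement[frozenset((w, x))],
        ))
        covered = {u for u in unexplained if explains(witnesses[best], x_readings, u)}
        if not covered:
            break  # remaining readings need some other explanation
        chosen.append(best)
        unexplained -= covered
        candidates.remove(best)
    return chosen, unexplained

witnesses = {  # invented readings at four variation units
    "X": {"u1": "a", "u2": "b", "u3": "a", "u4": "c"},
    "A": {"u1": "a", "u2": "b", "u3": "b", "u4": "a"},
    "B": {"u1": "b", "u2": "b", "u3": "a", "u4": "c"},
    "C": {"u1": "a", "u2": "a", "u3": "a", "u4": "a"},
}
agreement = {frozenset(p): a for p, a in
             [(("A", "X"), 50.0), (("B", "X"), 75.0), (("C", "X"), 50.0)]}
print(optimal_ancestors("X", witnesses, agreement))  # (['B', 'A'], set())
```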
I hope that this simplified summary of a very complex (but not impenetrable) process will be helpful, and of course NT colleagues are welcome to offer corrections to any misrepresentations! I applied some basic principles of this approach (though not the full method) in my dissertation on the Dead Sea Scrolls containing Exodus to great effect, and I would encourage other textual scholars looking to expand their methodological repertoire to consider it.
Wednesday, December 16, 2015
Glassman Holland Research Fellowship
The Albright Institute of Archaeological Research has posted a call for applications for a three-month postdoctoral fellowship available to European researchers; you can find more information and apply here. The AIAR also has a number of other good fellowships to consider. I spent six months there this spring and enjoyed my time immensely. They have a great team and research environment, and I highly recommend it to those who may be interested.
Monday, November 30, 2015
SBL 2015 - Part 2
In part 2 we will survey some of the text-critically relevant papers from SBL 2015, though there were many others I was not able to attend.
The combined Textual Criticism of the Hebrew Bible/Philology in Hebrew Studies session on the theme "Theory and Practice in Textual Criticism: The HBCE Project" was stimulating. Sidnie White Crawford discussed the use of the Temple Scroll for her edition of Deuteronomy, suggesting that it has affinities with the Septuagint, but that its unique readings are always secondary. Interestingly, the TS reads an imperfect form based on יבחר "will choose" against the perfect בחר "has chosen" in Deuteronomy, further demonstrating that both variant readings are old. Ronald Troxel gave a heavily theoretical paper on what exactly is the nature of "text," insisting that "text" is a socially constructed, unifying concept that supersedes its multiple instantiations in manuscripts. Ingrid Lilly pushed back on how overly rigid generic categories can lead scholars to make textual decisions based on literary expectations that may be foreign to the works they are examining.
Brandon Bruning suggested that the phrase מראת הצבאת in Exodus 38:8[Heb] should be taken as "visions concerning the troops," explaining it in the context of the construction of the tabernacle according to the pattern "shown" to Moses throughout Exodus. Jason Bembry suggested that the LXX-B reading "his concubine went away from him" in Judges 19:2 came first (Josephus likewise gives no hint of sexual immorality) and that later interpreters took it rather as the woman "committing harlotry" or "getting angry at the man." Julio Trebolle Barrera presented a very detailed paper demonstrating that documented redactional seams often occur at points marked in manuscripts by vacats to indicate text segmentation. Urmas Nõmmik suggested some specific, undocumented literary critical developments in the Hebrew tradition of Job based on comparison with the Old Greek text. And Seth Adcock suggested that the shorter text of Jeremiah 10 was abbreviated from the longer text to accommodate an apotropaic usage in light of an interpretation of the Aramaic verse 10:11.
In a session on recognizing the Kaige recension in the historical books, Andrés Piquer Otero examined a number of cases where good Old Georgian readings permit the identification of Kaige readings in the Lucianic text. Tuukka Kauhanen proposed a diagnostic model for identifying Kaige based on observable symptoms in the text, in much the same way doctors diagnose illnesses from symptoms. Pablo Torijano Morales argued that Ra 460 should be considered an Antiochean or Lucianic manuscript especially closely related to 700, yielding now seven Antiochean manuscripts in Kings (19-108 82-93-127-460-700). Julio Trebolle Barrera showed that 158 and 56-246 have numerous Antiochean readings inserted into their generally Kaige texts, often in the form of doublets.
In a session on textual criticism of the Pentateuch and Daniel, I argued that preserved manuscript remains and reconstructions suggest that approximately half of the copies of the book of Exodus evident from the Qumran remains were in fact situated in large pentateuchal collections, in most cases probably complete Torah scrolls. I illustrated the process of reconstructing 4QExod-c as a complete Torah scroll by showing that Exodus began in the middle of a column (suggesting it was preceded by Genesis) and that the circumference of the scroll was so large that it must have contained a text approximately the same length as the rest of the Pentateuch. David Rothstein showed how a variant reading in 4QPhyl-k and several Kennicott manuscripts finds reflexes in later rabbinic interpretations of Deut 11:4, with the waters pursuing the Egyptians. Dan McClellan supported the interpolation theory to explain the occurrence of the "angel" of the Lord and suggested cognitive scientific parallels to his proposed development of the concept. Amanda McGuire noted and evaluated the many differences between the Old Greek and MT/Theodotion in Daniel 9:27.
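For readers unfamiliar with this kind of material reconstruction, the underlying arithmetic is simple: each turn of a rolled scroll is shorter than the turn outside it by roughly 2π times the thickness of the parchment, so the circumference at a preserved point constrains how much material was still rolled up inside it. The sketch below uses invented numbers purely to illustrate the logic, not the measurements behind the paper.

```python
import math

# Illustrative only: the length of scroll rolled up inside a turn of a given
# circumference, assuming each successive turn toward the core shrinks by
# 2 * pi * thickness (a Stegemann-style model). All numbers are invented.
def rolled_length_cm(outer_circumference_cm, core_circumference_cm, thickness_cm):
    decrement = 2 * math.pi * thickness_cm
    length, c = 0.0, outer_circumference_cm
    while c > core_circumference_cm:
        length += c
        c -= decrement
    return length

# A turn of c. 25 cm around a c. 2 cm core with 0.25 mm parchment implies
# roughly 20 m of scroll still rolled up inside; an inference of this kind is
# what allows a large circumference to indicate a very long scroll.
print(round(rolled_length_cm(25.0, 2.0, 0.025) / 100, 1), "m")
```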
In a joint Aramaic Studies/Qumran session in honor of Moshe Bernstein, Edward Cook addressed the complications of distinguishing between ambiguity, polysemy, and contextual variation in lexicography. I was unfortunately unable to attend a talk by Loren Stuckenbruck on the translation of Aramaic forms into Greek in several works composed in Aramaic, as well as one by Jan Joosten on the need to look broadly at the history of Aramaic to read texts like the Genesis Apocryphon. Daniel Machiela explored the use of wisdom motifs in unexpected places in various Aramaic texts. And Michael Segal suggested that Daniel 6 (particularly in the MT tradition) was assimilated to parallels in Daniel 3 and in Esther.
An entire IOSCS session was devoted to reviewing Frank Shaw's The Earliest Non-Mystical Jewish Use of IAO, with responses by Ronald Troxel, Kristin De Troyer, Robert Kraft, and Martin Rösel. In this book Shaw discusses the earliest evidence for the use of ιαω for the tetragrammaton, though he never comes down conclusively on the question of whether or not this transliteration was originally used by the Septuagint translators. The respondents were generally appreciative, though with critical feedback, and Martin Rösel in particular disagreed sharply at points.
A good Qumran session rounded off the conference, with Matthew Goff suggesting that rabbinic sources can shed light on the fragmentary Qumran material from the Book of Giants. Seth Adcock reiterated his defense of the longer text of Jeremiah 10. Moshe Bernstein gave a review of early generic classifications of the Genesis Apocryphon, noting their weaknesses and their reflections in contemporary discussions. I then suggested a number of textual groups and statistical clusters that can be identified from within the Qumran corpus of Exodus materials, perhaps most importantly a newly-recognized tight group consisting of 4QpaleoGen-Exod-l and 4QExod-c. Ira Rabin then examined the results of her chemical analysis of several scrolls, suggesting that 1QIsa-a, 1QS, and 1QSb were prepared according to the same process. As usual, she included a number of little gems, such as explaining that, before the use of lime treatments for parchment (which dissolve the fat layer between the layers of skin and fuse them into a single layer), ancient parchment preparers could split the skins into two separate layers, producing very fine writing supports.
All in all, it was a great conference with many challenging topics. I got to meet many new people and catch up with old friends, and I consider the conference a great success.
Saturday, November 21, 2015
SBL 2015 - Part 1
The first SBL session I went to today was a very helpful session on critical editions from the German Bible Society. Richard Weis reviewed in detail the historical development of the editorial principles that set the stage for BHQ, which he illustrated from examples in the new Genesis volume by Abraham Tal. Kay Joe Petzold discussed the editorial concepts behind the Masorah in BHS and BHQ, the latter of which presents the Masorah of the Leningrad Codex more strictly diplomatically, in contrast to Gérard Weil's attempt in BHS to reconstruct a single Masorah over against L.
On the New Testament side, Holger Strutwolf announced the new NA/UBS editorial committee: Christos Karakolis, David Parker, Stephen Pisano, David Trobisch, and Klaus Wachtel. David Trobisch then gave a preliminary look at key issues discussed by the committee, stressing his desire to rearrange the books of the NT according to the order of ancient manuscripts.
Joseph Sanzo and Ra'anan Boustan stressed the difficulties of identifying Christian or Jewish socio-religious backgrounds to ancient magical texts and artifacts, given shared cultural elements. This, of course, is a complex problem in trying to understand the background of Septuagint manuscripts as well.
Friday, November 20, 2015
Evangelical Theological Society 2015
The Evangelical Theological Society held its annual meeting in Atlanta from 17-19 November, and I thought I would summarize some of the text-critically relevant papers for those who did not attend.
Russell Fuller and Richard McDonald argued that the study and teaching of Hebrew should be based on the use of Arabic grammatical categories, since it is the closest living language with a long history of grammatical analysis. They suggested that modern linguistic approaches have led to more confusion than insight, and that the use of native Semitic grammatical analyses better explains many phenomena. I must admit that the idea that we even need a paradigm language seems to me unnecessarily limiting. Neither does native Arabic grammar seem to me necessarily to be the best tool for studying Hebrew grammar. Nevertheless, it was a good reminder that we stand in a long line of grammatical tradition, and we would do well not to neglect the study of earlier grammarians and cognate languages.
Benjamin Giffone presented a theological paper on the problem for Evangelical bibliologies of defining a single, definitive text in a tradition that was repeatedly edited. He suggested that it is all but inevitable to have to appeal to "Catholic" arguments from community determination. He raised many insightful, probing theological questions, but unfortunately had no particular answer to give.
Eric Tully presented a helpful model for distinguishing between textual variants in a source language text and translation shifts in a target language text. He suggested gradually accumulating a database of tentative conclusions on individual readings, which can then inform later decisions or be corrected by later decisions. This iterative approach is not particularly new, but it was nice to see it clearly laid out.
Chris Stevens compared Titus in P32 and Sinaiticus, showing that the two are almost completely identical. He also used Sinaiticus to reconstruct the lacunae in P32 (rather than the NA text), further showing how close they are based on the near-perfect fit. I personally am a big fan of comparing the early witnesses to each other directly, rather than through intermediary textual witnesses or editions, so I appreciated that part of his paper.
Michael Kruger reevaluated P.Antinoopolis 12 (0232), a miniature codex containing 2 John. He suggested a 5th century date based on the physical features of the codex and the hand. He noted an error in the editio princeps, which led to a major error in the reconstruction of the codex. While admittedly speculative, he suggested that Hebrews may have been included in the codex along with the Catholic epistles, which would fill up the requisite amount of text indicated by the page numbers on the fragment.
Tomas Bokedal reviewed the history of the study of the nomina sacra, suggesting that Jesus was the primary member of the early core group of five. He suggested that the other names were chosen to line up with Christological creeds, indicating titles attributed to Jesus. This, of course, would imply a Christian origin for the nomina sacra.
Eric Mitchell presented on an unpublished fragment of Deuteronomy located at the Southwestern Baptist Theological Seminary. I missed the first part of the presentation, but if I understand correctly, it is a late 1st century BCE fragment from Qumran with regular morphologically long 2mp suffixes, but only one (semi-)meaningful difference from the MT. Unfortunately, the fragment reads the broken word י]בחר, so I don't know if it is possible to tell whether it read the perfect (SP) or imperfect (MT) in that important ideological difference, though Eric reconstructed with the MT.
I caught the last part of Peter Gurry's paper on the textual variants in the divorce passages in the Gospels in light of the Coherence-Based Genealogical Method. Among other things, he suggested that genealogical coherence points toward a longer reading in Matt. 19:9.
Nicholas Perrin argued against Watson that P. Egerton 2 does not witness to a pre-Johannine source, but is rather secondary. Among other arguments, he suggested that key stylistic features of the common text are part of broader themes in John that cannot be explained on the basis of P. Egerton 2 alone.
David Yoon looked at the use of ekthesis (putting the first letter of a line in the margin for visual prominence) in Galatians in Sinaiticus, suggesting that the text segments divided by ekthesis cannot be identified as paragraphs according to modern understandings, since they occur too frequently and sometimes even mid-sentence. He did not come to a definitive conclusion as to what exactly was the function of the scribal practice.
Saturday, May 23, 2015
Textual Communities Workshop, KU Leuven 11 and 12 June 2015
I received the following announcement from Peter Robinson, which may be of interest to some. For Old Testament scholars who may not know, Peter Robinson has a long-standing project editing the Canterbury Tales and is one of the leaders in the use of digital methods for the creation of critical editions. For those interested in learning the platform he has created for making critical editions, this would be a great opportunity.
Textual Communities Workshop, KU Leuven 11 and 12 June 2015
Museumzaal (MSI 02.08, Erasmusplein 2, 3000 Leuven)
This workshop will serve three overlapping purposes.
First, it will introduce the Textual Communities system for creating scholarly editions in digital form. Textual Communities allows scholars and scholarly groups to make highest-quality editions in digital form, with minimal specialist computing knowledge and support. It is particularly suited to the making of editions which do not fit the pattern of “digital documentary editions”: that is, editions of works in many manuscripts or versions, or editions of non-authorial manuscripts. Accordingly, Textual Communities includes tools for handling images, page-by-page transcription, collation of multiple versions, project management, and more. See the draft article describing Textual Communities at https://www.academia.edu/12297061/Some_principles_for_the_making_of_collaborative_scholarly_editions_in_digital_form.
Second, it will offer training to transcribers joining the Canterbury Tales project, and to scholars leading transcription teams within the project. The project is undertaking the transcription of all 30,000 pages of the 88 pre-1500 witnesses of the Tales (18000 pages already transcribed but requiring checking; 12000 needing new transcription). Participants will be given accounts within the Textual Communities implementation of the Canterbury Tales project, introduced to the transcription system, and undertake their first transcriptions of pages from the Tales. See http://www.textualcommunities.usask.ca/web/canterbury-tales/wiki/-/wiki/Main/Becoming+a+transcriber.
Third, it will offer an introduction to the principles of manuscript transcription for digital editions to any scholars or students considering undertaking a digital edition project based on a manuscript. The materials of the Canterbury Tales project will be used as a starting point for discussion of transcription, supplemented by reference to other textual traditions on which the workshop leaders have worked (including Dante, medieval Spanish and New Testament Greek).
This workshop will be useful to scholars undertaking a wide range of digital edition projects, especially of works existing in multiple witnesses. Because both the architect of Textual Communities (Robinson) and its chief programmer (Xiaohan Zhang) will be present, it will be useful also for technical consultants who plan to work with the Textual Communities API. And, of course, it will be useful for transcribers joining the Canterbury Tales project.
There is no charge for this workshop, but places will be limited. Please contact Barbara Bordalejo barbara.bordalejo@kuleuven.be or Peter Robinson peter.robinson@usask.ca to confirm attendance. For accommodation, see http://www.leuven.be/en/tourism/staying/index.jsp.