Translation in Transition at Barnard College: Frontiers and Futures of Translation

Even newer than translation studies itself is one of its emerging arms and its more-likely-than-not future: the machine in translation. How is the digital age changing the act of translation, and how will it continue to? All of the presenters at “Frontiers and Futures of Translation: The Machine Age, the Age of the Digital Humanities,” part of Barnard College’s fifth annual translation conference, were preoccupied in some way with the machine in translation. Notably, every presenter articulated that computational processes in translation are an expression of a quality fundamental to translation as it already exists. Perhaps Audrey Lorberfeld of the University of Washington said it best in her “Exploration of Bibliographic Relationships of Translated Documents”: “Data is only informative in that it can be related to something else.” Of course, this is an assumption of translation studies, so patently relevant and true that it often goes without saying, and all of Friday’s panelists explored it in their own illuminating ways. By presenting a new bibliographic model, one that classifies translations as discrete entities cleaved from their “original” texts, Lorberfeld investigated how those relations could be restructured to have a lasting impact on how people search for and, ultimately, consider texts in translation. This model, she claimed, allows translators and translations to “occupy the space between the signifier/signified.”

In her presentation “The History of News Translation and its Place in our Discipline,” Mairi McLaughlin provided a helpful retrospective frame for the entire panel by posing a seemingly simple question: Who is the translator? Importantly, McLaughlin stressed the need for more research into the “invisibility” of the historical translator and deconstructed the translator’s image by citing the “individualistic conception of authorship.” In their own ways, the first two presenters had likewise pursued the invisibility of the translator figure. John Cayley, a digital language artist and professor of Literary Arts at Brown University (whose extremely interesting work this blogger did her best to comprehend), walked us through his idea that the “work is embodied in the apparatus.” Cayley displayed a fascination with appropriation as it pertains to the apparatus and translation, asking translators and attendees: What is to be done with a text produced by a procedure? Is the procedural element itself translatable? His elucidating examples included the beautiful problem of procedurally translating Lisa Robertson’s Cinema of the Present, a work that depends on the arbitrary ordering and re-ordering of entire lines. Essential to Cayley’s thought was the conclusion that computational translation is not lesser, and that “dissonance” in translation studies is not a pejorative term. In “Translated Texts in Digital Spaces: Collaborative Translation and the Challenges to Translation Theory,” Miguel A. Jiménez-Crespo of Rutgers University followed the thread, positing that the invisibility of the translator is the future: the digital age will enfold the previously “isolated” translator into an online community. Jiménez-Crespo lauded the burgeoning crowdsourcing movement in translation, exploring its nuances through several existing online platforms that depend on communities of volunteers to translate content in real time (Facebook, Twitter, TED, Asia Online). All of these models are fundamentally social and participatory, a quality reinforced by the increasing digitization of the world’s communities, and all appeal to users in order to gather “human” translations. The “challenge” these digital communities present, as Jiménez-Crespo demonstrated, is to the stability, or fixedness, of both the source and translated texts, even as they afford new and potentially far better pathways to language localization.
