@tra38
Created November 30, 2015 03:23
Rearranging the paragraphs of "NaNoGenMo: Dada 2.0"
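
The stories below are rearrangements of the same paragraphs from the original essay. The generating script is not included in this gist; a minimal sketch of the idea, assuming the essay text sits in a local file (the filename here is hypothetical), might look like this:

```python
import random

# Split the source essay into paragraphs. "dada_2.0.txt" is a hypothetical
# filename standing in for wherever the essay text is stored.
with open("dada_2.0.txt") as f:
    paragraphs = f.read().split("\n\n")

# Emit several "stories," each a fresh random rearrangement of the same paragraphs.
for i in range(7):
    shuffled = list(paragraphs)  # copy, so every story shuffles the original order
    random.shuffle(shuffled)
    print("Story %d\n" % i)
    print("\n\n".join(shuffled))
    print()
```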

Story 0

NLG algorithms are generally considered to be a form of “artificial” or “machine intelligence” because they do things—like write news articles about sports or the weather, or write real estate ads, as the prototype my Fast Forward Labs colleagues built—that we believe humans alone can do. (I hope to explore the implications of the historical, relativist concept of artificial intelligence, espoused by people like Nancy Fulda, in a separate post.) As illustrated in the WSJ article, most people then evaluate NLG performance the way André Bazin evaluates style in realism: as the art of realism lies in the seeming absence of artifice, so too does the art of algorithms lie in the seeming absence of automation. Commercialization only enhances this push towards verisimilitude, as investment banks and outlets like Forbes won’t pay top dollar for software that generates strange prose. In turn, we come to judge machine intelligence by its humanness, orienting development efforts towards writing prose that we would have written ourselves.

While open to anyone and, as in NaNoWriMo, governed by the single constraint that submissions contain at least 50,000 words, NaNoGenMo is gradually defining itself as a cohesive artistic movement that uses algorithms to experiment with literary form. The group’s identity is partly generated by ressentiment towards negative criticism that their “disjointed, robotic scripts” are “unlikely to trouble Booker judges.” Last year, one participant mocked how “futile it is to try to explain what we’re actually doing here, to the normals.” More positively, they are shaping identity through shared formal and critical resources. John Ohno (alias enkiv2) posted code to generate sestinas, haikus, and synonyms. Allison Parrish (alias aparrish) shared an interface to the Carnegie Mellon Pronouncing Dictionary that enables users to do things like scrape the dictionary for rhymes for a given word. Finally, Isaac Karth (alias ikarth) explained to members how the group’s tendency to assemble new poetry from prior texts has intellectual roots in Dadaism, Burroughs’s cut-up techniques, and the constraint-oriented works of Oulipo. When I spoke with Kazemi about the project, he said that Ken Goldsmith’s Uncreative Writing had inspired his thinking on how NaNoGenMo can challenge customary notions of authorship and creativity.
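
For a sense of what Parrish’s resource enables, here is a short sketch using her pronouncing Python package, which wraps the CMU Pronouncing Dictionary (assuming that package as the packaged form of the interface; the query word is arbitrary):

```python
import pronouncing  # pip install pronouncing; wraps the CMU Pronouncing Dictionary

# All dictionary words whose final stressed syllables rhyme with "climbing".
print(pronouncing.rhymes("climbing")[:10])

# The underlying ARPAbet transcription, usable for meter as well as rhyme.
print(pronouncing.phones_for_word("climbing"))
```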

What if machines generated text with different stylistic goals? Or rather, what if we evaluated machine intelligence not by its humanness but by its alienness, by its ability to generate something beyond what we could have created—or would have thought to create—without the assistance of an algorithm? What if automated prose could rupture our automatized perceptions, as Shklovsky described poetry in Art as Device, and offer a new vehicle for our own creativity?

The latest developments in machine learning are enabling machines to develop models of us in turn, ever updating what information they present and how they present it to match the input we provide. Kazemi is addressing this new give-and-take between man and machine head-on in his 2015 NaNoGenMo submission, “co-authoring” a novel with an algorithm: for every ten sentences the algorithm drafts, he commits only the one he, as human, likes best. “Who wrote the book?” he asks. “[The algorithm] wrote literally every word, but [I] dictated nearly the entire form of the novel.” This is the same kind of dynamic that new research tools built on IBM Watson are presenting to lawyers and doctors: ROSS, a legal tool built on the Watson API, presents answers to research questions, and all the lawyer has to do is commit the answer she likes best. If NaNoGenMo helps us think more deeply about that dynamic, it can offer important insights into the future of AI.
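
The mechanics of that one-in-ten commitment are simple to state in code. Here is a sketch, with a canned placeholder standing in for Kazemi’s actual sentence generator, which the essay does not describe:

```python
import random

CANNED = ["The harbor was quiet.", "She counted the stairs twice.",
          "Nothing in the letter surprised him."]

def draft_sentence():
    # Placeholder for the real generator, whose workings aren't described here.
    return random.choice(CANNED)

novel = []
while len(novel) < 50:  # loop until the human has kept 50 sentences
    candidates = [draft_sentence() for _ in range(10)]
    for i, sentence in enumerate(candidates):
        print(i, sentence)
    choice = int(input("Commit which sentence (0-9)? "))  # the human keeps exactly one of ten
    novel.append(candidates[choice])
```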

Moniker, a design studio based in Amsterdam, wrote a simple query that scans Twitter for sentences of the form “it’s + hour + : + minute + am/pm + and +” to compose a real-time global diary of daily activities. The “it’s [hour] and I am” prompt tends to elicit predictable confessions or complaints, showing how expressions automate our thoughts: “It’s 12:20 and I need a drink;” “It’s 1:00 pm and I have not moved from my bed;” “It’s 11:00 pm and I’ve finally got a decent cup of coffee.” Twide and Twejudice replaces most of the dialogue in Austen’s Pride and Prejudice with words used in similar contexts on Twitter, resulting in frivolous dialogue: (Mr Bennet asking Mrs Bennet about Mr Bingley:) “Is he/she overrun 0r single?” (Mrs Bennet exclaiming about Mr Bingley’s arrival:) “What _a fineee thingi 4my rageaholics girls!” While these lack the sophistication of The Seeker, by polluting Austen with Twitter diction they illustrate how contemporary media have modified communication norms.
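
Moniker’s exact query is not reproduced here; a regular expression that mirrors the quoted template would look something like this:

```python
import re

# Mirrors the "it's + hour + : + minute + am/pm + and" template quoted above.
PATTERN = re.compile(
    r"\bit[’']s\s+(1[0-2]|0?[1-9]):[0-5][0-9]\s*([ap]\.?m\.?)?\s+and\b",
    re.IGNORECASE,
)

tweets = [
    "It’s 12:20 and I need a drink",
    "it's 1:00 pm and I have not moved from my bed",
    "lunch was great",
]
diary = [t for t in tweets if PATTERN.search(t)]
print(diary)  # keeps the two diary-style tweets, drops the rest
```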

Allison Parrish’s I Waded in Clear Water uses sentiment analysis algorithms, which rank sentences based upon features that indicate emotional texture, to transform Gustavus Hindman Miller’s Ten Thousand Dreams Interpreted. Parrish mobilizes the formulaic “action” and “denotation” structure of Miller’s text (action = “To see an oak full of acorns”; denotation = “denotes increase and promotion”). She first transforms the actions into first-person, simple-past sentences (“I saw an oak full of acorns”) and then reorders the sentences from the worst to the best thing that can happen in dreams, according to a score given by a sentiment analysis algorithm run on the denotation. The sentiment scores create short chapters (“I drove into muddy water. I saw others weeding.”) and longer chapters with paratactic strings of disjointed actions: “…I descended a ladder. I saw any one lame. I saw my lover taking laudanum through disappointment. I heard mocking laughter. I kept a ledge. I had lice on my body. I saw. I lost it. I felt melancholy over any event. I saw others melancholy. I sent a message….” According to the sentiment algorithm, wading in clear water is our best dream.
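
A toy reconstruction of that pipeline makes the mechanics concrete. The essay does not name Parrish’s sentiment model, so the scorer below is a stand-in word list, and the grammatical transform handles only the three sample verbs:

```python
# Toy version of the I Waded in Clear Water pipeline described above.
entries = [
    ("To see an oak full of acorns", "denotes increase and promotion"),
    ("To dream that you wade in clear water", "denotes exquisite joys"),
    ("To drive into muddy water", "denotes unfortunate speculation"),
]

LEXICON = {"increase": 1, "promotion": 1, "exquisite": 2, "joys": 2, "unfortunate": -2}

def score(denotation):
    # Stand-in scorer: sums word-level sentiment from a tiny lexicon.
    return sum(LEXICON.get(word, 0) for word in denotation.lower().split())

def first_person_past(action):
    body = action[3:]  # drop the leading "To "
    body = body.replace("dream that you wade", "waded")
    body = body.replace("see", "saw").replace("drive", "drove")
    return "I " + body

# Worst dream first, best dream last, exactly as the book orders its chapters.
for action, denotation in sorted(entries, key=lambda e: score(e[1])):
    print(first_person_past(action))
```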

Evaluating these works by their capacity to read like human prose is a stale exercise because what qualifies as “natural” language is relative, not absolute. Our own linguistic habits are developed through interaction with others, be they members of a given social class, colleagues at work or school, or spambots littering our Twitter feeds. In a recent Medium post, Katie Rose Pipkin eloquently described how machines have already modified what we think of as natural language, whether we’re cognizant of it or not. We speak differently to search tools and virtual assistants because we have come to develop a tacit understanding of how they work and can modify our requests to communicate effectively.

At least two 2014 submissions use dreams as a locus to explore the odd beauty of machine intelligence. Thricedotted’s The Seeker relates the autobiography of a machine trying to “learn about human behavior by reading WikiHow.” The work is visually beautiful, with each iteration of the algorithm’s operations punctuated by pages that raindrop abstractions and house aphorisms like “imagine not one thing could be undirected.” Like the hopscotch overtones in Cortázar’s Rayuela, the aphorisms encourage the reader to perceive meaningful patterns in what might otherwise be random data (Thricedotted’s internet identity often mentions apophenia). Time and again, the algorithm repeats a “work, scan, imagine” loop, scraping WikiHow, searching plain text memories for a concept encountered during “work,” and building a dream sequence—or “univision”—from concepts it doesn’t recognize. These univisions contain the most surprising poetry in the work, where beauty arises from the reader’s ineluctable tendency to feel meaning in fragments.
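
As described, the loop invites caricature in code. The following is speculative (thricedotted’s implementation surely differs), but it captures the shape of the work/scan/imagine cycle:

```python
# Speculative sketch of The Seeker's "work, scan, imagine" cycle.
known_concepts = set()

def scan(text):
    # SCAN: collect the concepts in a new memory that aren't recognized yet.
    words = text.lower().split()
    unknown = [w for w in words if w not in known_concepts]
    known_concepts.update(words)  # everything seen becomes recognized next time
    return unknown

def imagine(unknown):
    # IMAGINE: assemble a dream sequence ("univision") from unrecognized concepts.
    return " / ".join(unknown)

# WORK: stand-in strings for text scraped from WikiHow articles.
for article in ["how to apologize sincerely", "how to be sincerely undirected"]:
    print("univision:", imagine(scan(article)))
```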

Earlier this year, the Wall Street Journal (WSJ) published an article with an interactive sidebar featuring excerpts from financial investment research reports. Readers were prompted to identify whether the excerpts were written by robots or humans. Admittedly, Wall Street’s preference for terse prose over poetic flourish lends itself to a challenge like this. “Q2 cash balance expectation of $830m implies ~$80m of cash burn in Q2 after a $140m reduction in cash balance in Q1,” one sampled sentence, is effectively just three data points fused together with syntax. And that’s no coincidence. White-collar workers like journalists need not fear for their job security (at least not yet…) because new natural language generation (NLG) algorithms are very good at representing structured data sets in prose, but not yet very good at much else. That capability in itself is very powerful, as our ability to draw insights from data often depends on how they are presented (e.g., a chart reveals insights one would have missed in rows and columns). But it is a far cry from the creative courage required to build a world on a blank page.

Technical constraints explain why NaNoGenMo has come to align itself with poetics of recontextualization and reassembly. Indeed, genuine NLG algorithms (that is, those that can build words and syntax from the building blocks of letters and get smarter over time) are still nascent. Most of the 2014 submissions instead use rules to transform existing texts in creative ways, which also leads to topical similarities.

It is this search to use automation as a vehicle for defamiliarization that makes NaNoGenMo so exciting. Darius Kazemi, an internet artist who runs an annual Bot Summit, created NaNoGenMo “on a whim” in November 2013. Thoughtful about literary form, Kazemi was amused by the fact that National Novel Writing Month (NaNoWriMo) set only two criteria for participants: submissions must be written in 30 days (the month of November) and must comprise at least 50,000 words. The absence of formal constraints invited experimentation: why write a novel when you can write an algorithm that writes a novel? He tweeted his idea, and a new community formed on GitHub (a web-based software collaboration platform).

Story 1

NLG algorithms are generally considered to be a form of “artificial” or “machine intelligence” because they do things—like write news articles about sports or the weather, or write real estate ads, as the prototype my Fast Forward Labs colleagues built—that we believe humans alone can do. (I hope to explore the implications of the historical, relativist concept of artificial intelligence, espoused by people like Nancy Fulda, in a separate post.) As illustrated in the WSJ article, most people then evaluate NLG performance the way André Bazin evaluates style in realism: as the art of realism lies in the seeming absence of artifice, so too does the art of algorithms lie in the seeming absence of automation. Commercialization only enhances this push towards verisimilitude, as investment banks and outlets like Forbes won’t pay top dollar for software that generates strange prose. In turn, we come to judge machine intelligence by its humanness, orienting development efforts towards writing prose that we would have written ourselves.

Moniker, a design studio based in Amsterdam, wrote a simple query that scans Twitter for sentences of the form “it’s + hour + : + minute + am/pm + and +” to compose a real-time global diary of daily activities. The “it’s [hour] and I am” prompt tends to elicit predictable confessions or complaints, showing how expressions automate our thoughts: “It’s 12:20 and I need a drink;” “It’s 1:00 pm and I have not moved from my bed;” “It’s 11:00 pm and I’ve finally got a decent cup of coffee.” Twide and Twejudice replaces most of the dialogue in Austen’s Pride and Prejudice with words used in similar contexts on Twitter, resulting in frivolous dialogue: (Mr Bennet asking Mrs Bennet about Mr Bingley:) “Is he/she overrun 0r single?” (Mrs Bennet exclaiming about Mr Bingley’s arrival:) “What _a fineee thingi 4my rageaholics girls!” While these lack the sophistication of The Seeker, by polluting Austen with Twitter diction they illustrate how contemporary media have modified communication norms.

Earlier this year, the Wall Street Journal (WSJ) published an article with an interactive sidebar featuring excerpts from financial investment research reports. Readers were prompted to identify whether the excerpts were written by robots or humans. Admittedly, Wall Street’s preference for terse prose over poetic flourish lends itself to a challenge like this. “Q2 cash balance expectation of $830m implies ~$80m of cash burn in Q2 after a $140m reduction in cash balance in Q1,” one sampled sentence, is effectively just three data points fused together with syntax. And that’s no coincidence. White-collar workers like journalists need not fear for their job security (at least not yet…) because new natural language generation (NLG) algorithms are very good at representing structured data sets in prose, but not yet very good at much else. That capability in itself is very powerful, as our ability to draw insights from data often depends on how they are presented (e.g., a chart reveals insights one would have missed in rows and columns). But it is a far cry from the creative courage required to build a world on a blank page.

The latest developments in machine learning are enabling machines to develop models of us in turn, ever updating what information they present and how they present it to match the input we provide. Kazemi is addressing this new give-and-take between man and machine head-on in his 2015 NaNoGenMo submission, “co-authoring” a novel with an algorithm: for every ten sentences the algorithm drafts, he commits only the one he, as human, likes best. “Who wrote the book?” he asks. “[The algorithm] wrote literally every word, but [I] dictated nearly the entire form of the novel.” This is the same kind of dynamic that new research tools built on IBM Watson are presenting to lawyers and doctors: ROSS, a legal tool built on the Watson API, presents answers to research questions, and all the lawyer has to do is commit the answer she likes best. If NaNoGenMo helps us think more deeply about that dynamic, it can offer important insights into the future of AI.

Allison Parrish’s I Waded in Clear Water uses sentiment analysis algorithms, which rank sentences based upon features that indicate emotional texture, to transform Gustavus Hindman Miller’s Ten Thousand Dreams Interpreted. Parrish mobilizes the formulaic “action” and “denotation” structure of Miller’s text (action = “To see an oak full of acorns”; denotation = “denotes increase and promotion”). She first transforms the actions into first-person, simple-past sentences (“I saw an oak full of acorns”) and then reorders the sentences from the worst to the best thing that can happen in dreams, according to a score given by a sentiment analysis algorithm run on the denotation. The sentiment scores create short chapters (“I drove into muddy water. I saw others weeding.”) and longer chapters with paratactic strings of disjointed actions: “…I descended a ladder. I saw any one lame. I saw my lover taking laudanum through disappointment. I heard mocking laughter. I kept a ledge. I had lice on my body. I saw. I lost it. I felt melancholy over any event. I saw others melancholy. I sent a message….” According to the sentiment algorithm, wading in clear water is our best dream.

At least two 2014 submissions use dreams as a locus to explore the odd beauty of machine intelligence. Thricedotted’s The Seeker relates the autobiography of a machine trying to “learn about human behavior by reading WikiHow.” The work is visually beautiful, with each iteration of the algorithm’s operations punctuated by pages that raindrop abstractions and house aphorisms like “imagine not one thing could be undirected.” Like the hopscotch overtones in Cortázar’s Rayuela, the aphorisms encourage the reader to perceive meaningful patterns in what might otherwise be random data (Thricedotted’s internet identity often mentions apophenia). Time and again, the algorithm repeats a “work, scan, imagine” loop, scraping WikiHow, searching plain text memories for a concept encountered during “work,” and building a dream sequence—or “univision”—from concepts it doesn’t recognize. These univisions contain the most surprising poetry in the work, where beauty arises from the reader’s ineluctable tendency to feel meaning in fragments.

Evaluating these works by their capacity to read like human prose is a stale exercise because what qualifies as “natural” language is relative, not absolute. Our own linguistic habits are developed through interaction with others, be they members of a given social class, colleagues at work or school, or spambots littering our Twitter feeds. In a recent Medium post, Katie Rose Pipkin eloquently described how machines have already modified what we think of as natural language, whether we’re cognizant of it or not. We speak differently to search tools and virtual assistants because we have come to develop a tacit understanding of how they work and can modify our requests to communicate effectively.

It is this search to use automation as a vehicle for defamiliarization that makes NaNoGenMo so exciting. Darius Kazemi, an internet artist who runs an annual Bot Summit, created NaNoGenMo “on a whim” in November 2013. Thoughtful about literary form, Kazemi was amused by the fact that National Novel Writing Month (NaNoWriMo) set only two criteria for participants: submissions must be written in 30 days (the month of November) and must comprise at least 50,000 words. The absence of formal constraints invited experimentation: why write a novel when you can write an algorithm that writes a novel? He tweeted his idea, and a new community formed on GitHub (a web-based software collaboration platform).

What if machines generated text with different stylistic goals? Or rather, what if we evaluated machine intelligence not by its humanness but by its alienness, by its ability to generate something beyond what we could have created—or would have thought to create—without the assistance of an algorithm? What if automated prose could rupture our automatized perceptions, as Shklovsky described poetry in Art as Device, and offer a new vehicle for our own creativity?

While open to anyone and, as in NaNoWriMo, governed by the single constraint that submissions contain at least 50,000 words, NaNoGenMo is gradually defining itself as a cohesive artistic movement that uses algorithms to experiment with literary form. The group’s identity is partly generated by ressentiment towards negative criticism that their “disjointed, robotic scripts” are “unlikely to trouble Booker judges.” Last year, one participant mocked how “futile it is to try to explain what we’re actually doing here, to the normals.” More positively, they are shaping identity through shared formal and critical resources. John Ohno (alias enkiv2) posted code to generate sestinas, haikus, and synonyms. Allison Parrish (alias aparrish) shared an interface to the Carnegie Mellon Pronouncing Dictionary that enables users to do things like scrape the dictionary for rhymes for a given word. Finally, Isaac Karth (alias ikarth) explained to members how the group’s tendency to assemble new poetry from prior texts has intellectual roots in Dadaism, Burroughs’s cut-up techniques, and the constraint-oriented works of Oulipo. When I spoke with Kazemi about the project, he said that Ken Goldsmith’s Uncreative Writing had inspired his thinking on how NaNoGenMo can challenge customary notions of authorship and creativity.

Technical constraints explain why NaNoGenMo has come to align itself with poetics of recontextualization and reassembly. Indeed, genuine NLG algorithms (that is, those that can build words and syntax from the building blocks of letters and get smarter over time) are still nascent. Most of the 2014 submissions instead use rules to transform existing texts in creative ways, which also leads to topical similarities.

Story 2

It is this search to use automation as a vehicle for defamiliarization that makes NaNoGenMo so exciting. Darius Kazemi, an internet artist who runs an annual Bot Summit, created NaNoGenMo “on a whim” in November 2013. Thoughtful about literary form, Kazemi was amused by the fact that National Novel Writing Month (NaNoWriMo) set only two criteria for participants: submissions must be written in 30 days (the month of November) and must comprise at least 50,000 words. The absence of formal constraints invited experimentation: why write a novel when you can write an algorithm that writes a novel? He tweeted his idea, and a new community formed on GitHub (a web-based software collaboration platform).

Technical constraints explain why NaNoGenMo has come to align itself with poetics of recontextualization and reassembly. Indeed, genuine NLG algorithms (that is, those that can build words and syntax from the building blocks of letters and get smarter over time) are still nascent. Most of the 2014 submissions instead use rules to transform existing texts in creative ways, which also leads to topical similarities.

Allison Parrish’s I Waded in Clear Water uses sentiment analysis algorithms, which rank sentences based upon features that indicate emotional texture, to transform Gustavus Hindman Miller’s Ten Thousand Dreams Interpreted. Parrish mobilizes the formulaic “action” and “denotation” structure of Miller’s text (action = “To see an oak full of acorns”; denotation = “denotes increase and promotion”). She first transforms the actions into first-person, simple-past sentences (“I saw an oak full of acorns”) and then reorders the sentences from the worst to the best thing that can happen in dreams, according to a score given by a sentiment analysis algorithm run on the denotation. The sentiment scores create short chapters (“I drove into muddy water. I saw others weeding.”) and longer chapters with paratactic strings of disjointed actions: “…I descended a ladder. I saw any one lame. I saw my lover taking laudanum through disappointment. I heard mocking laughter. I kept a ledge. I had lice on my body. I saw. I lost it. I felt melancholy over any event. I saw others melancholy. I sent a message….” According to the sentiment algorithm, wading in clear water is our best dream.

Moniker, a design studio based in Amsterdam, wrote a simple query that scans Twitter for sentences of the form “it’s + hour + : + minute + am/pm + and +” to compose a real-time global diary of daily activities. The “it’s [hour] and I am” prompt tends to elicit predictable confessions or complaints, showing how expressions automate our thoughts: “It’s 12:20 and I need a drink;” “It’s 1:00 pm and I have not moved from my bed;” “It’s 11:00 pm and I’ve finally got a decent cup of coffee.” Twide and Twejudice replaces most of the dialogue in Austen’s Pride and Prejudice with words used in similar contexts on Twitter, resulting in frivolous dialogue: (Mr Bennet asking Mrs Bennet about Mr Bingley:) “Is he/she overrun 0r single?” (Mrs Bennet exclaiming about Mr Bingley’s arrival:) “What _a fineee thingi 4my rageaholics girls!” While these lack the sophistication of The Seeker, by polluting Austen with Twitter diction they illustrate how contemporary media have modified communication norms.

At least two 2014 submissions use dreams as a locus to explore the odd beauty of machine intelligence. Thricedotted’s The Seeker relates the autobiography of a machine trying to “learn about human behavior by reading WikiHow.” The work is visually beautiful, with each iteration of the algorithm’s operations punctuated by pages that raindrop abstractions and house aphorisms like “imagine not one thing could be undirected.” Like the hopscotch overtones in Cortázar’s Rayuela, the aphorisms encourage the reader to perceive meaningful patterns in what might otherwise be random data (Thricedotted’s internet identity often mentions apophenia). Time and again, the algorithm repeats a “work, scan, imagine” loop, scraping WikiHow, searching plain text memories for a concept encountered during “work,” and building a dream sequence—or “univision”—from concepts it doesn’t recognize. These univisions contain the most surprising poetry in the work, where beauty arises from the reader’s ineluctable tendency to feel meaning in fragments.

NLG algorithms are generally considered to be a form of “artificial” or “machine intelligence” because they do things—like write news articles about sports or the weather, or write real estate ads, as the prototype my Fast Forward Labs colleagues built—that we believe humans alone can do. (I hope to explore the implications of the historical, relativist concept of artificial intelligence, espoused by people like Nancy Fulda, in a separate post.) As illustrated in the WSJ article, most people then evaluate NLG performance the way André Bazin evaluates style in realism: as the art of realism lies in the seeming absence of artifice, so too does the art of algorithms lie in the seeming absence of automation. Commercialization only enhances this push towards verisimilitude, as investment banks and outlets like Forbes won’t pay top dollar for software that generates strange prose. In turn, we come to judge machine intelligence by its humanness, orienting development efforts towards writing prose that we would have written ourselves.

What if machines generated text with different stylistic goals? Or rather, what if we evaluated machine intelligence not by its humanness but by its alienness, by its ability to generate something beyond what we could have created—or would have thought to create—without the assistance of an algorithm? What if automated prose could rupture our automatized perceptions, as Shklovsky described poetry in Art as Device, and offer a new vehicle for our own creativity?

While open to anyone and, as in NaNoWriMo, governed by the single constraint that submissions contain at least 50,000 words, NaNoGenMo is gradually defining itself as a cohesive artistic movement that uses algorithms to experiment with literary form. The group’s identity is partly generated by ressentiment towards negative criticism that their “disjointed, robotic scripts” are “unlikely to trouble Booker judges.” Last year, one participant mocked how “futile it is to try to explain what we’re actually doing here, to the normals.” More positively, they are shaping identity through shared formal and critical resources. John Ohno (alias enkiv2) posted code to generate sestinas, haikus, and synonyms. Allison Parrish (alias aparrish) shared an interface to the Carnegie Mellon Pronouncing Dictionary that enables users to do things like scrape the dictionary for rhymes for a given word. Finally, Isaac Karth (alias ikarth) explained to members how the group’s tendency to assemble new poetry from prior texts has intellectual roots in Dadaism, Burroughs’s cut-up techniques, and the constraint-oriented works of Oulipo. When I spoke with Kazemi about the project, he said that Ken Goldsmith’s Uncreative Writing had inspired his thinking on how NaNoGenMo can challenge customary notions of authorship and creativity.

Earlier this year, the Wall Street Journal (WSJ) published an article with an interactive sidebar featuring excerpts from financial investment research reports. Readers were prompted to identify whether the excerpts were written by robots or humans. Admittedly, Wall Street’s preference for terse prose over poetic flourish lends itself to a challenge like this. “Q2 cash balance expectation of $830m implies ~$80m of cash burn in Q2 after a $140m reduction in cash balance in Q1,” one sampled sentence, is effectively just three data points fused together with syntax. And that’s no coincidence. White-collar workers like journalists need not fear for their job security (at least not yet…) because new natural language generation (NLG) algorithms are very good at representing structured data sets in prose, but not yet very good at much else. That capability in itself is very powerful, as our ability to draw insights from data often depends on how they are presented (e.g., a chart reveals insights one would have missed in rows and columns). But it is a far cry from the creative courage required to build a world on a blank page.

Evaluating these works by their capacity to read like human prose is a stale exercise because what qualifies as “natural” language is relative, not absolute. Our own linguistic habits are developed through interaction with others, be they members of a given social class, colleagues at work or school, or spambots littering our Twitter feeds. In a recent Medium post, Katie Rose Pipkin eloquently described how machines have already modified what we think of as natural language, whether we’re cognizant of it or not. We speak differently to search tools and virtual assistants because we have come to develop a tacit understanding of how they work and can modify our requests to communicate effectively.

The latest developments in machine learning are enabling machines to develop models of us in turn, ever updating what information they present and how they present it to match the input we provide. Kazemi is addressing this new give-and-take between man and machine head-on in his 2015 NaNoGenMo submission, “co-authoring” a novel with an algorithm: for every ten sentences the algorithm drafts, he commits only the one he, as human, likes best. “Who wrote the book?” he asks. “[The algorithm] wrote literally every word, but [I] dictated nearly the entire form of the novel.” This is the same kind of dynamic that new research tools built on IBM Watson are presenting to lawyers and doctors: ROSS, a legal tool built on the Watson API, presents answers to research questions, and all the lawyer has to do is commit the answer she likes best. If NaNoGenMo helps us think more deeply about that dynamic, it can offer important insights into the future of AI.

Story 3

What if machines generated text with different stylistic goals? Or rather, what if we evaluated machine intelligence not by its humanness but by its alienness, by its ability to generate something beyond what we could have created—or would have thought to create—without the assistance of an algorithm? What if automated prose could rupture our automatized perceptions, as Shklovsky described poetry in Art as Device, and offer a new vehicle for our own creativity?

At least two 2014 submissions use dreams as a locus to explore the odd beauty of machine intelligence. Thricedotted’s The Seeker relates the autobiography of a machine trying to “learn about human behavior by reading WikiHow.” The work is visually beautiful, with each iteration of the algorithm’s operations punctuated by pages that raindrop abstractions and house aphorisms like “imagine not one thing could be undirected.” Like the hopscotch overtones in Cortázar’s Rayuela, the aphorisms encourage the reader to perceive meaningful patterns in what might otherwise be random data (Thricedotted’s internet identity often mentions apophenia). Time and again, the algorithm repeats a “work, scan, imagine” loop, scraping WikiHow, searching plain text memories for a concept encountered during “work,” and building a dream sequence—or “univision”—from concepts it doesn’t recognize. These univisions contain the most surprising poetry in the work, where beauty arises from the reader’s ineluctable tendency to feel meaning in fragments.

Evaluating these works by their capacity to read like human prose is a stale exercise because what qualifies as “natural” language is relative, not absolute. Our own linguistic habits are developed through interaction with others, be they members of a given social class, colleagues at work or school, or spambots littering our Twitter feeds. In a recent Medium post, Katie Rose Pipkin eloquently described how machines have already modified what we think of as natural language, whether we’re cognizant of it or not. We speak differently to search tools and virtual assistants because we have come to develop a tacit understanding of how they work and can modify our requests to communicate effectively.

While open to anyone and, as in NaNoWriMo, governed by the single constraint that submissions contain at least 50,000 words, NaNoGenMo is gradually defining itself as a cohesive artistic movement that uses algorithms to experiment with literary form. The group’s identity is partly generated by ressentiment towards negative criticism that their “disjointed, robotic scripts” are “unlikely to trouble Booker judges.” Last year, one participant mocked how “futile it is to try to explain what we’re actually doing here, to the normals.” More positively, they are shaping identity through shared formal and critical resources. John Ohno (alias enkiv2) posted code to generate sestinas, haikus, and synonyms. Allison Parrish (alias aparrish) shared an interface to the Carnegie Mellon Pronouncing Dictionary that enables users to do things like scrape the dictionary for rhymes for a given word. Finally, Isaac Karth (alias ikarth) explained to members how the group’s tendency to assemble new poetry from prior texts has intellectual roots in Dadaism, Burroughs’s cut-up techniques, and the constraint-oriented works of Oulipo. When I spoke with Kazemi about the project, he said that Ken Goldsmith’s Uncreative Writing had inspired his thinking on how NaNoGenMo can challenge customary notions of authorship and creativity.

Allison Parrish’s I Waded in Clear Water uses sentiment analysis algorithms, which rank sentences based upon features that indicate emotional texture, to transform Gustavus Hindman Miller’s Ten Thousand Dreams Interpreted. Parrish mobilizes the formulaic “action” and “denotation” structure of Miller’s text (action = “To see an oak full of acorns”; denotation = “denotes increase and promotion”). She first transforms the actions into first-person, simple-past sentences (“I saw an oak full of acorns”) and then reorders the sentences from the worst to the best thing that can happen in dreams, according to a score given by a sentiment analysis algorithm run on the denotation. The sentiment scores create short chapters (“I drove into muddy water. I saw others weeding.”) and longer chapters with paratactic strings of disjointed actions: “…I descended a ladder. I saw any one lame. I saw my lover taking laudanum through disappointment. I heard mocking laughter. I kept a ledge. I had lice on my body. I saw. I lost it. I felt melancholy over any event. I saw others melancholy. I sent a message….” According to the sentiment algorithm, wading in clear water is our best dream.

Moniker, a design studio based in Amsterdam, wrote a simple query that scans Twitter for sentences of the form “it’s + hour + : + minute + am/pm + and +” to compose a real-time global diary of daily activities. The “it’s [hour] and I am” prompt tends to elicit predictable confessions or complaints, showing how expressions automate our thoughts: “It’s 12:20 and I need a drink;” “It’s 1:00 pm and I have not moved from my bed;” “It’s 11:00 pm and I’ve finally got a decent cup of coffee.” Twide and Twejudice replaces most of the dialogue in Austen’s Pride and Prejudice with words used in similar contexts on Twitter, resulting in frivolous dialogue: (Mr Bennet asking Mrs Bennet about Mr Bingley:) “Is he/she overrun 0r single?” (Mrs Bennet exclaiming about Mr Bingley’s arrival:) “What _a fineee thingi 4my rageaholics girls!” While these lack the sophistication of The Seeker, by polluting Austen with Twitter diction they illustrate how contemporary media have modified communication norms.

Technical constraints explain why NaNoGenMo has come to align itself with poetics of recontextualization and reassembly. Indeed, genuine NLG algorithms (that is, those that can build words and syntax from the building blocks of letters and get smarter over time) are still nascent. Most of the 2014 submissions instead use rules to transform existing texts in creative ways, which also leads to topical similarities.

The latest developments in machine learning are enabling machines to develop models of us in turn, ever updating what information they present and how they present it to match the input we provide. Kazemi is addressing this new give-and-take between man and machine head-on in his 2015 NaNoGenMo submission, “co-authoring” a novel with an algorithm: for every ten sentences the algorithm drafts, he commits only the one he, as human, likes best. “Who wrote the book?” he asks. “[The algorithm] wrote literally every word, but [I] dictated nearly the entire form of the novel.” This is the same kind of dynamic that new research tools built on IBM Watson are presenting to lawyers and doctors: ROSS, a legal tool built on the Watson API, presents answers to research questions, and all the lawyer has to do is commit the answer she likes best. If NaNoGenMo helps us think more deeply about that dynamic, it can offer important insights into the future of AI.

Earlier this year, the Wall Street Journal (WSJ) published an article with an interactive sidebar featuring excerpts from financial investment research reports. Readers were prompted to identify whether the excerpts were written by robots or humans. Admittedly, Wall Street’s preference for terse prose over poetic flourish lends itself to a challenge like this. “Q2 cash balance expectation of $830m implies ~$80m of cash burn in Q2 after a $140m reduction in cash balance in Q1,” one sampled sentence, is effectively just three data points fused together with syntax. And that’s no coincidence. White-collar workers like journalists need not fear for their job security (at least not yet…) because new natural language generation (NLG) algorithms are very good at representing structured data sets in prose, but not yet very good at much else. That capability in itself is very powerful, as our ability to draw insights from data often depends on how they are presented (e.g., a chart reveals insights one would have missed in rows and columns). But it is a far cry from the creative courage required to build a world on a blank page.

It is this search to use automation as a vehicle for defamiliarization that makes NaNoGenMo so exciting. Darius Kazemi, an internet artist who runs an annual Bot Summit, created NaNoGenMo “on a whim” in November 2013. Thoughtful about literary form, Kazemi was amused by the fact that National Novel Writing Month (NaNoWriMo) set only two criteria for participants: submissions must be written in 30 days (the month of November) and must comprise at least 50,000 words. The absence of formal constraints invited experimentation: why write a novel when you can write an algorithm that writes a novel? He tweeted his idea, and a new community formed on GitHub (a web-based software collaboration platform).

NLG algorithms are generally considered to be a form of “artificial” or “machine intelligence” because they do things—like write news articles about sports or the weather, or write real estate ads, as the prototype my Fast Forward Labs colleagues built—that we believe humans alone can do. (I hope to explore the implications of the historical, relativist concept of artificial intelligence, espoused by people like Nancy Fulda, in a separate post.) As illustrated in the WSJ article, most people then evaluate NLG performance the way André Bazin evaluates style in realism: as the art of realism lies in the seeming absence of artifice, so too does the art of algorithms lie in the seeming absence of automation. Commercialization only enhances this push towards verisimilitude, as investment banks and outlets like Forbes won’t pay top dollar for software that generates strange prose. In turn, we come to judge machine intelligence by its humanness, orienting development efforts towards writing prose that we would have written ourselves.

Story 4

Allison Parrish’s I Waded in Clear Water uses sentiment analysis algorithms, which rank sentences based upon features that indicate emotional texture, to transform Gustavus Hindman Miller’s Ten Thousand Dreams Interpreted. Parrish mobilizes the formulaic “action” and “denotation” structure of Miller’s text (action = “To see an oak full of acorns”; denotation = “denotes increase and promotion”). She first transforms the actions into first-person, simple-past sentences (“I saw an oak full of acorns”) and then reorders the sentences from the worst to the best thing that can happen in dreams, according to a score given by a sentiment analysis algorithm run on the denotation. The sentiment scores create short chapters (“I drove into muddy water. I saw others weeding.”) and longer chapters with paratactic strings of disjointed actions: “…I descended a ladder. I saw any one lame. I saw my lover taking laudanum through disappointment. I heard mocking laughter. I kept a ledge. I had lice on my body. I saw. I lost it. I felt melancholy over any event. I saw others melancholy. I sent a message….” According to the sentiment algorithm, wading in clear water is our best dream.

Evaluating these works by their capacity to read like human prose is a stale exercise because what qualifies as “natural” language is relative, not absolute. Our own linguistic habits are developed through interaction with others, be they members of a given social class, colleagues at work or school, or spambots littering our Twitter feeds. In a recent Medium post, Katie Rose Pipkin eloquently described how machines have already modified what we think of as natural language, whether we’re cognizant of it or not. We speak differently to search tools and virtual assistants because we have come to develop a tacit understanding of how they work and can modify our requests to communicate effectively.

At least two 2014 submissions use dreams as a locus to explore the odd beauty of machine intelligence. Thricedotted’s The Seeker relates the autobiography of a machine trying to “learn about human behavior by reading WikiHow.” The work is visually beautiful, with each iteration of the algorithm’s operations punctuated by pages that raindrop abstractions and house aphorisms like “imagine not one thing could be undirected.” Like the hopscotch overtones in Cortázar’s Rayuela, the aphorisms encourage the reader to perceive meaningful patterns in what might otherwise be random data (Thricedotted’s internet identity often mentions apophenia). Time and again, the algorithm repeats a “work, scan, imagine” loop, scraping WikiHow, searching plain text memories for a concept encountered during “work,” and building a dream sequence—or “univision”—from concepts it doesn’t recognize. These univisions contain the most surprising poetry in the work, where beauty arises from the reader’s ineluctable tendency to feel meaning in fragments.

NLG algorithms are generally considered to be a form of “artificial” or “machine intelligence” because they do things—like write news articles about sports or the weather, or write real estate ads, as the prototype my Fast Forward Labs colleagues built—that we believe humans alone can do. (I hope to explore the implications of the historical, relativist concept of artificial intelligence, espoused by people like Nancy Fulda, in a separate post.) As illustrated in the WSJ article, most people then evaluate NLG performance the way André Bazin evaluates style in realism: as the art of realism lies in the seeming absence of artifice, so too does the art of algorithms lie in the seeming absence of automation. Commercialization only enhances this push towards verisimilitude, as investment banks and outlets like Forbes won’t pay top dollar for software that generates strange prose. In turn, we come to judge machine intelligence by its humanness, orienting development efforts towards writing prose that we would have written ourselves.

Moniker, a design studio based in Amsterdam, wrote a simple query that scans Twitter for sentences of the form “it’s + hour + : + minute + am/pm + and +” to compose a real-time global diary of daily activities. The “it’s [hour] and I am” prompt tends to elicit predictable confessions or complaints, showing how expressions automate our thoughts: “It’s 12:20 and I need a drink;” “It’s 1:00 pm and I have not moved from my bed;” “It’s 11:00 pm and I’ve finally got a decent cup of coffee.” Twide and Twejudice replaces most of the dialogue in Austen’s Pride and Prejudice with words used in similar contexts on Twitter, resulting in frivolous dialogue: (Mr Bennet asking Mrs Bennet about Mr Bingley:) “Is he/she overrun 0r single?” (Mrs Bennet exclaiming about Mr Bingley’s arrival:) “What _a fineee thingi 4my rageaholics girls!” While these lack the sophistication of The Seeker, by polluting Austen with Twitter diction they illustrate how contemporary media have modified communication norms.

It is this search to use automation as a vehicle for defamiliarization that makes NaNoGenMo so exciting. Darius Kazemi, an internet artist who runs an annual Bot Summit, created NaNoGenMo “on a whim” in November 2013. Thoughtful about literary form, Kazemi was amused by the fact that National Novel Writing Month (NaNoWriMo) set only two criteria for participants: submissions must be written in 30 days (the month of November) and must comprise at least 50,000 words. The absence of formal constraints invited experimentation: why write a novel when you can write an algorithm that writes a novel? He tweeted his idea, and a new community formed on GitHub (a web-based software collaboration platform).

Earlier this year, the Wall Street Journal (WSJ) published an article with an interactive sidebar featuring excerpts from financial investment research reports. Readers were prompted to identify whether the excerpts were written by robots or humans. Admittedly, Wall Street’s preference for terse prose over poetic flourish lends itself to a challenge like this. “Q2 cash balance expectation of $830m implies ~$80m of cash burn in Q2 after a $140m reduction in cash balance in Q1,” one sampled sentence, is effectively just three data points fused together with syntax. And that’s no coincidence. White-collar workers like journalists need not fear for their job security (at least not yet…) because new natural language generation (NLG) algorithms are very good at representing structured data sets in prose, but not yet very good at much else. That capability in itself is very powerful, as our ability to draw insights from data often depends on how they are presented (e.g., a chart reveals insights one would have missed in rows and columns). But it is a far cry from the creative courage required to build a world on a blank page.

The latest developments in machine learning are enabling machines to develop models of us in turn, ever updating what information they present and how they present it to match the input we provide. Kazemi is addressing this new give-and-take between man and machine head-on in his 2015 NaNoGenMo submission, “co-authoring” a novel with an algorithm: for every ten sentences the algorithm drafts, he commits only the one he, as human, likes best. “Who wrote the book?” he asks. “[The algorithm] wrote literally every word, but [I] dictated nearly the entire form of the novel.” This is the same kind of dynamic that new research tools built on IBM Watson are presenting to lawyers and doctors: ROSS, a legal tool built on the Watson API, presents answers to research questions, and all the lawyer has to do is commit the answer she likes best. If NaNoGenMo helps us think more deeply about that dynamic, it can offer important insights into the future of AI.

What if machines generated text with different stylistic goals? Or rather, what if we evaluated machine intelligence not by its humanness but by its alienness, by its ability to generate something beyond what we could have created—or would have thought to create—without the assistance of an algorithm? What if automated prose could rupture our automatized perceptions, as Shklovsky described poetry in Art as Device, and offer a new vehicle for our own creativity?

Technical constraints explain why NaNoGenMo has come to align itself with poetics of recontextualization and reassembly. Indeed, genuine NLG algorithms (that is, those that can build words and syntax from the building blocks of letters and get smarter over time) are still nascent. Most of the 2014 submissions instead use rules to transform existing texts in creative ways, which also leads to topical similarities.

While open to anyone and, as in NaNoWriMo, governed by the single constraint that submissions contain at least 50,000 words, NaNoGenMo is gradually defining itself as a cohesive artistic movement that uses algorithms to experiment with literary form. The group’s identity is partly generated by ressentiment towards negative criticism that their “disjointed, robotic scripts” are “unlikely to trouble Booker judges.” Last year, one participant mocked how “futile it is to try to explain what we’re actually doing here, to the normals.” More positively, they are shaping identity through shared formal and critical resources. John Ohno (alias enkiv2) posted code to generate sestinas, haikus, and synonyms. Allison Parrish (alias aparrish) shared an interface to the Carnegie Mellon Pronouncing Dictionary that enables users to do things like scrape the dictionary for rhymes for a given word. Finally, Isaac Karth (alias ikarth) explained to members how the group’s tendency to assemble new poetry from prior texts has intellectual roots in Dadaism, Burroughs’s cut-up techniques, and the constraint-oriented works of Oulipo. When I spoke with Kazemi about the project, he said that Ken Goldsmith’s Uncreative Writing had inspired his thinking on how NaNoGenMo can challenge customary notions of authorship and creativity.

Story 5

It is this search to use automation as a vehicle for defamiliarization that makes NaNoGenMo so exciting. Darius Kazemi, an internet artist who runs an annual Bot Summit, created NaNoGenMo “on a whim” in November 2013. Thoughtful about literary form, Kazemi was amused by the fact that National Novel Writing Month (NaNoWriMo) set only two criteria for participants: submissions must be written in 30 days (the month of November) and must comprise at least 50,000 words. The absence of formal constraints invited experimentation: why write a novel when you can write an algorithm that writes a novel? He tweeted his idea, and a new community formed on GitHub (a web-based software collaboration platform).

The latest developments in machine learning are enabling machines to develop models of us in turn, ever updating what information they present and how they present it to match the input we provide. Kazemi is addressing this new give-and-take between man and machine head-on in his 2015 NaNoGenMo submission, “co-authoring” a novel with an algorithm: for every ten sentences the algorithm drafts, he commits only the one he, as human, likes best. “Who wrote the book?” he asks. “[The algorithm] wrote literally every word, but [I] dictated nearly the entire form of the novel.” This is the same kind of dynamic that new research tools built on IBM Watson are presenting to lawyers and doctors: ROSS, a legal tool built on the Watson API, presents answers to research questions, and all the lawyer has to do is commit the answer she likes best. If NaNoGenMo helps us think more deeply about that dynamic, it can offer important insights into the future of AI.

Allison Parrish’s I Waded in Clear Water uses sentiment analysis algorithms, which rank sentences based upon features that indicate emotional texture, to transform Gustavus Hindman Miller’s Ten Thousand Dreams Interpreted. Parrish mobilizes the formulaic “action” and “denotation” structure of Miller’s text (action = “To see an oak full of acorns”; denotation = “denotes increase and promotion”). She first transforms the actions into first-person, simple-past sentences (“I saw an oak full of acorns”) and then reorders the sentences from the worst to the best thing that can happen in dreams, according to a score given by a sentiment analysis algorithm run on the denotation. The sentiment scores create short chapters (“I drove into muddy water. I saw others weeding.”) and longer chapters with paratactic strings of disjointed actions: “…I descended a ladder. I saw any one lame. I saw my lover taking laudanum through disappointment. I heard mocking laughter. I kept a ledge. I had lice on my body. I saw. I lost it. I felt melancholy over any event. I saw others melancholy. I sent a message….” According to the sentiment algorithm, wading in clear water is our best dream.

At least two 2014 submissions use dreams as a locus to explore the odd beauty of machine intelligence. Thricedotted’s The Seeker relates the autobiography of a machine trying to “learn about human behavior by reading WikiHow.” The work is visually beautiful, with each iteration of the algorithm’s operations punctuated by pages that raindrop abstractions and house aphorisms like “imagine not one thing could be undirected.” Like the hopscotch overtones in Cortázar’s Rayuela, the aphorisms encourage the reader to perceive meaningful patterns in what might otherwise be random data (Thricedotted’s internet identity often mentions apophenia). Time and again, the algorithm repeats a “work, scan, imagine” loop, scraping WikiHow, searching plain text memories for a concept encountered during “work,” and building a dream sequence—or “univision”—from concepts it doesn’t recognize. These univisions contain the most surprising poetry in the work, where beauty arises from the reader’s ineluctable tendency to feel meaning in fragments.

While open to anyone and, as in NaNoWriMo, governed by the single constraint that submissions contain at least 50,000 words, NaNoGenMo is gradually defining itself as a cohesive artistic movement that uses algorithms to experiment with literary form. The group’s identity is partly generated by ressentiment towards negative criticism that their “disjointed, robotic scripts” are “unlikely to trouble Booker judges.” Last year, one participant mocked how “futile it is to try to explain what we’re actually doing here, to the normals.” More positively, they are shaping identity through shared formal and critical resources. John Ohno (alias enkiv2) posted code to generate sestinas, haikus, and synonyms. Allison Parrish (alias aparrish) shared an interface to the Carnegie Mellon Pronouncing Dictionary that enables users to do things like scrape the dictionary for rhymes for a given word. Finally, Isaac Karth (alias ikarth) explained to members how the group’s tendency to assemble new poetry from prior texts has intellectual roots in Dadaism, Burroughs’s cut-up techniques, and the constraint-oriented works of Oulipo. When I spoke with Kazemi about the project, he said that Ken Goldsmith’s Uncreative Writing had inspired his thinking on how NaNoGenMo can challenge customary notions of authorship and creativity.

Moniker, a design studio based in Amsterdam, wrote a simple query that scans Twitter for sentences of the form “it’s + hour + : + minute + am/pm + and +” to compose a real-time global diary of daily activities. The “it’s [hour] and I am” prompt tends to elicit predictable confessions or complaints, showing how expressions automate our thoughts: “It’s 12:20 and I need a drink;” “It’s 1:00 pm and I have not moved from my bed;” “It’s 11:00 pm and I’ve finally got a decent cup of coffee.” Twide and Twejudice replaces most of the dialogue in Austen’s Pride and Prejudice with words used in similar contexts on Twitter, resulting in frivolous dialogue: (Mr Bennet asking Mrs Bennet about Mr Bingley:) “Is he/she overrun 0r single?” (Mrs Bennet exclaiming about Mr Bingley’s arrival:) “What _a fineee thingi 4my rageaholics girls!” While these lack the sophistication of The Seeker, by polluting Austen with Twitter diction they illustrate how contemporary media have modified communication norms.

NLG algorithms are generally considered to be a form of “artificial” or “machine intelligence” because they do things—like write news articles about sports or the weather, or write real estate ads, as the prototype my Fast Forward Labs colleagues built—that we believe humans alone can do. (I hope to explore the implications of the historical, relativist concept of artificial intelligence, espoused by people like Nancy Fulda, in a separate post.) As illustrated in the WSJ article, most people then evaluate NLG performance the way André Bazin evaluates style in realism: as the art of realism lies in the seeming absence of artifice, so too does the art of algorithms lie in the seeming absence of automation. Commercialization only enhances this push towards verisimilitude, as investment banks and outlets like Forbes won’t pay top dollar for software that generates strange prose. In turn, we come to judge machine intelligence by its humanness, orienting development efforts towards writing prose that we would have written ourselves.

Earlier this year, the Wall Street Journal (WSJ) published an article with an interactive sidebar featuring excerpts from financial investment research reports. Readers were prompted to identify whether the excerpts were written by robots or humans. Admittedly, Wall Street’s preference for terse prose over poetic flourish makes a challenge like this plausible. “Q2 cash balance expectation of $830m implies ~$80m of cash burn in Q2 after a $140m reduction in cash balance in Q1,” a sampled sentence, is effectively just three data points fused together with syntax. And that’s no coincidence. White-collar workers like journalists need not fear for their job security (at least not yet…) because new natural language generation (NLG) algorithms are very good at representing structured data sets in prose, but not yet very good at much else. That capability in itself is very powerful, as our ability to draw insights from data often depends on how they are presented (e.g., a chart reveals insights one would have missed in rows and columns). But it is a far cry from the creative courage required to build a world on a blank page.
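
That “three data points fused together with syntax” is almost a specification for template-based data-to-text generation, the bread and butter of commercial NLG. A minimal sketch, with field names invented for illustration:

```python
# Three structured data points plus a fixed template yield the WSJ-style
# sentence. The dictionary keys are hypothetical, not a real vendor schema.
report = {
    "q2_cash_expectation_m": 830,
    "q2_cash_burn_m": 80,
    "q1_cash_reduction_m": 140,
}

TEMPLATE = (
    "Q2 cash balance expectation of ${q2_cash_expectation_m}m implies "
    "~${q2_cash_burn_m}m of cash burn in Q2 after a ${q1_cash_reduction_m}m "
    "reduction in cash balance in Q1"
)

print(TEMPLATE.format(**report))
```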

Technical constraints explain why NaNoGenMo has come to align itself with poetics of recontextualization and reassembly. Genuine NLG algorithms, that is, those that can build words and syntax from the building blocks of letters and get smarter over time, are still nascent. Most of the 2014 submissions instead use rules to transform existing texts in creative ways, which also leads to topical similarities.

Evaluating these works by their capacity to read like human prose is a stale exercise because what qualifies as “natural” language is relative, not absolute. Our own linguistic habits are developed through interaction with others, be they members of a given social class, colleagues at work or school, or spambots littering our Twitter feeds. In a recent Medium post, Katie Rose Pipkin eloquently described how machines have already modified what we think of as natural language, whether we’re cognizant of it or not. We speak differently to search tools and virtual assistants because we have come to develop a tacit understanding of how they work and can modify our requests to communicate effectively.

What if machines generated text with different stylistic goals? Or rather, what if we evaluated machine intelligence not by its humanness but by its alienness, by its ability to generate something beyond what we could have created—or would have thought to create—without the assistance of an algorithm? What if automated prose could rupture our automatized perceptions, as Shklovsky described poetry in Art as Device, and offer a new vehicle for our own creativity?

Story 6

Earlier this year, the Wall Street Journal (WSJ) published an article with an interactive sidebar featuring excerpts from financial investment research reports. Readers were prompted to identify whether the excerpts were written by robots or humans. Admittedly, Wall Street’s preference for terse prose over poetic flourish makes a challenge like this plausible. “Q2 cash balance expectation of $830m implies ~$80m of cash burn in Q2 after a $140m reduction in cash balance in Q1,” a sampled sentence, is effectively just three data points fused together with syntax. And that’s no coincidence. White-collar workers like journalists need not fear for their job security (at least not yet…) because new natural language generation (NLG) algorithms are very good at representing structured data sets in prose, but not yet very good at much else. That capability in itself is very powerful, as our ability to draw insights from data often depends on how they are presented (e.g., a chart reveals insights one would have missed in rows and columns). But it is a far cry from the creative courage required to build a world on a blank page.

It is this search to use automation as a vehicle for defamiliarization that makes NaNoGenMo so exciting. Darius Kazemi, an internet artist who runs an annual Bot Summit, created NaNoGenMo “on a whim” in November 2013. Thoughtful about literary form, Kazemi was amused by the fact that National Novel Writing Month (NaNoWriMo) set only two criteria for participants: submissions must be written in 30 days (the month of November) and must comprise at least 50,000 words. The absence of formal constraints invited experimentation: why write a novel when you can write an algorithm that writes a novel? He tweeted his idea, and a new community formed on GitHub (a web-based collaboration tool for software development).

What if machines generated text with different stylistic goals? Or rather, what if we evaluated machine intelligence not by its humanness but by its alienness, by its ability to generate something beyond what we could have created—or would have thought to create—without the assistance of an algorithm? What if automated prose could rupture our automatized perceptions, as Shklovsky described poetry in Art as Device, and offer a new vehicle for our own creativity?

The latest developments in machine learning are enabling machines to develop models of us in turn, ever updating what information they present and how they present it to match the input we provide. Kazemi is addressing this new give-and-take between man and machine head-on in his 2015 NaNoGenMo submission, “co-authoring” a novel with an algorithm: for every ten sentences the algorithm drafts, he commits only the one he, as the human, likes best. “Who wrote the book?” he asks. “[The algorithm] wrote literally every word, but [I] dictated nearly the entire form of the novel.” This is the same kind of dynamic that new research tools built on IBM Watson present to lawyers and doctors: ROSS, a legal tool built on the Watson API, presents answers to research questions, and all the lawyer has to do is commit the answer she likes best. If NaNoGenMo helps us think more deeply about that dynamic, it can offer important insights into the future of AI.
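
The draft-ten, commit-one dynamic is simple to sketch. The toy generator below is a placeholder, not Kazemi’s actual algorithm; the shape of the loop, in which the machine proposes and the human disposes, is the only point.

```python
# Human-in-the-loop selection: the algorithm drafts ten sentences per round,
# the person commits exactly one. draft_sentence() is a stand-in generator.
import random

WORDS = "the sea remembered every name it had ever drowned".split()

def draft_sentence():
    return " ".join(random.sample(WORDS, 5)).capitalize() + "."

novel = []
for _ in range(2):  # two rounds, for demonstration
    drafts = [draft_sentence() for _ in range(10)]
    for i, sentence in enumerate(drafts):
        print(i, sentence)
    choice = int(input("Commit which sentence? "))
    novel.append(drafts[choice])

print("\n".join(novel))
```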

At least two 2014 submissions use dreams as a locus to explore the odd beauty of machine intelligence. Thricedotted’s The Seeker relates the autobiography of a machine trying to “learn about human behavior by reading WikiHow.” The work is visually beautiful, with each iteration of the algorithm’s operations punctuated by pages that raindrop abstractions and house aphorisms like “imagine not one thing could be undirected.” Like the hopscotch overtones in Cortázar’s Rayuela, the aphorisms encourage the reader to perceive meaningful patterns in what might otherwise be random data (Thricedotted’s internet identity often mentions apophenia). Time and again, the algorithm repeats a “work, scan, imagine” loop, scraping WikiHow, searching plain text memories for a concept encountered during “work,” and building a dream sequence—or “univision”—from concepts it doesn’t recognize. These univisions contain the most surprising poetry in the work, where beauty arises from the reader’s ineluctable tendency to feel meaning in fragments.

Evaluating these works by their capacity to read like human prose is a stale exercise because what qualifies as “natural” language is relative, not absolute. Our own linguistic habits are developed through interaction with others, be they members of a given social class, colleagues at work or school, or spambots littering our Twitter feeds. In a recent Medium post, Katie Rose Pipkin eloquently described how machines have already modified what we think of as natural language, whether we’re cognizant of it or not. We speak differently to search tools and virtual assistants because we have come to develop a tacit understanding of how they work and can modify our requests to communicate effectively.

Moniker, a design studio based in Amsterdam, wrote a simple query that scans Twitter for sentences of the form “it’s + hour + : + minute + am/pm + and +” to compose a real-time global diary of daily activities. The “it’s [time] and I…” formula tends to elicit predictable confessions or complaints, showing how stock expressions automate our thoughts: “It’s 12:20 and I need a drink”; “It’s 1:00 pm and I have not moved from my bed”; “It’s 11:00 pm and I’ve finally got a decent cup of coffee.” Twide and Twejudice replaces most of the dialogue in Austen’s Pride and Prejudice with words used in similar contexts on Twitter, resulting in frivolous dialogue: (Mr Bennet asking Mrs Bennet about Mr Bingley:) “Is he/she overrun 0r single?” (Mrs Bennet exclaiming about Mr Bingley’s arrival:) “What _a fineee thingi 4my rageaholics girls!” While these works lack the sophistication of The Seeker, by polluting Austen with Twitter diction they illustrate how contemporary media have modified communication norms.

NLG algorithms are generally considered to be a form of “artificial” or “machine intelligence” because they do things—like write news articles about sports or the weather, or write real estate ads, as the prototype my Fast Forward Labs colleagues built—that we believe humans alone can do. (I hope to explore the implications of the historical, relativist concept of artificial intelligence, espoused by people like Nancy Fulda, in a separate post.) As illustrated in the WSJ article, most people then evaluate NLG performance the way André Bazin evaluated style in realism: as the art of realism lies in the seeming absence of artifice, so too does the art of algorithms lie in the seeming absence of automation. Commercialization only enhances this push towards verisimilitude, as investment banks and media outlets like Forbes won’t pay top dollar for software that generates strange prose. In turn, we come to judge machine intelligence by its humanness, orienting development efforts towards writing prose that we would have written ourselves.

While open to anyone and, as in NaNoWriMo, governed by the single constraint that submissions contain at least 50,000 words, NaNoGenMo is gradually defining itself as a cohesive artistic movement that uses algorithms to experiment with literary form. The group’s identity is partly generated by ressentiment towards negative criticism that their “disjointed, robotic scripts” are “unlikely to trouble Booker judges.” Last year, one participant mocked how “futile it is to try to explain what we’re actually doing here, to the normals.” More positively, they are shaping identity through shared formal and critical resources. John Ohno (alias enkiv2) posted code to generate sestinas, haikus, and synonyms. Allison Parrish (alias aparrish) shared an interface to the Carnegie Mellon Pronouncing Dictionary that enables users to do things like scrape the dictionary for rhymes for a given word. Finally, Isaac Karth (alias ikarth) explained to members how the group’s tendency to assemble new poetry from prior texts has intellectual roots in Dadaism, Burroughs’s cut-up techniques, and the constraint-oriented works of Oulipo. When I spoke with Kazemi about the project, he said that Ken Goldsmith’s Uncreative Writing had inspired his thinking on how NaNoGenMo can challenge customary notions of authorship and creativity.

Technical constraints explain why NaNoGenMo has come to align itself with poetics of recontextualization and reassembly. Genuine NLG algorithms, that is, those that can build words and syntax from the building blocks of letters and get smarter over time, are still nascent. Most of the 2014 submissions instead use rules to transform existing texts in creative ways, which also leads to topical similarities.

Allison Parrish’s I Waded in Clear Water uses sentiment analysis algorithms, which rank sentences based on features that indicate emotional texture, to transform Gustavus Hindman Miller’s Ten Thousand Dreams Interpreted. Parrish mobilizes the formulaic “action” and “denotation” structure of Miller’s text (action = “To see an oak full of acorns”; denotation = “denotes increase and promotion”). She first transforms the actions into first-person, simple-past sentences (“I saw an oak full of acorns”) and then reorders the sentences from the worst to the best thing that can happen in dreams, according to a score given by a sentiment analysis algorithm run on the denotation. The sentiment scores create short chapters (“I drove into muddy water. I saw others weeding.”) and longer chapters with paratactic strings of disjointed actions: “…I descended a ladder. I saw any one lame. I saw my lover taking laudanum through disappointment. I heard mocking laughter. I kept a ledge. I had lice on my body. I saw. I lost it. I felt melancholy over any event. I saw others melancholy. I sent a message….” According to the sentiment algorithm, wading in clear water is our best dream.
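
Parrish’s pipeline (rewrite each “action” in the first person, score each “denotation” for sentiment, sort from worst to best) can be sketched compactly. The three entries and the toy lexicon scorer below are stand-ins for Miller’s full text and for whatever sentiment model Parrish actually used.

```python
# Rewrite Miller's actions as first-person past tense, then order them by
# the sentiment of their denotations. Lexicon and rewrite rules are toys.
entries = [
    ("To see an oak full of acorns", "denotes increase and promotion"),
    ("To dream of muddy water", "denotes trouble and losses"),
    ("To wade in clear water", "denotes joyful and prosperous days"),
]

LEXICON = {"increase": 1, "promotion": 1, "joyful": 2, "prosperous": 2,
           "trouble": -2, "losses": -1}

VERB_MAP = {"see": "saw", "dream": "dreamed", "wade": "waded"}

def first_person_past(action):
    # "To see X" -> "I saw X" (crude; Parrish's rewriting is more careful)
    _, verb, *rest = action.split()
    return " ".join(["I", VERB_MAP.get(verb, verb + "d")] + rest) + "."

def score(denotation):
    return sum(LEXICON.get(word, 0) for word in denotation.lower().split())

for action, denotation in sorted(entries, key=lambda e: score(e[1])):
    print(first_person_past(action))
# The best-scoring entry prints last: "I waded in clear water."
```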

Story 7

What if machines generated text with different stylistic goals? Or rather, what if we evaluated machine intelligence not by its humanness but by its alienness, by its ability to generate something beyond what we could have created—or would have thought to create—without the assistance of an algorithm? What if automated prose could rupture our automatized perceptions, as Shklovsky described poetry in Art as Device, and offer a new vehicle for our own creativity?

Earlier this year, the Wall Street Journal (WSJ) published an article with an interactive sidebar featuring excerpts from financial investment research reports. Readers were prompted to identify whether the excerpts were written by robots or humans. Admittedly, Wall Street’s preference for terse prose over poetic flourish makes a challenge like this plausible. “Q2 cash balance expectation of $830m implies ~$80m of cash burn in Q2 after a $140m reduction in cash balance in Q1,” a sampled sentence, is effectively just three data points fused together with syntax. And that’s no coincidence. White-collar workers like journalists need not fear for their job security (at least not yet…) because new natural language generation (NLG) algorithms are very good at representing structured data sets in prose, but not yet very good at much else. That capability in itself is very powerful, as our ability to draw insights from data often depends on how they are presented (e.g., a chart reveals insights one would have missed in rows and columns). But it is a far cry from the creative courage required to build a world on a blank page.

At least two 2014 submissions use dreams as a locus to explore the odd beauty of machine intelligence. Thricedotted’s The Seeker relates the autobiography of a machine trying to “learn about human behavior by reading WikiHow.” The work is visually beautiful, with each iteration of the algorithm’s operations punctuated by pages that raindrop abstractions and house aphorisms like “imagine not one thing could be undirected.” Like the hopscotch overtones in Cortázar’s Rayuela, the aphorisms encourage the reader to perceive meaningful patterns in what might otherwise be random data (Thricedotted’s internet identity often mentions apophenia). Time and again, the algorithm repeats a “work, scan, imagine” loop, scraping WikiHow, searching plain text memories for a concept encountered during “work,” and building a dream sequence—or “univision”—from concepts it doesn’t recognize. These univisions contain the most surprising poetry in the work, where beauty arises from the reader’s ineluctable tendency to feel meaning in fragments.

NLG algorithms are generally considered to be a form of “artificial” or “machine intelligence” because they do things—like write news articles about sports or the weather, or write real estate ads, as the prototype my Fast Forward Labs colleagues built—that we believe humans alone can do. (I hope to explore the implications of the historical, relativist concept of artificial intelligence, espoused by people like Nancy Fulda, in a separate post.) As illustrated in the WSJ article, most people then evaluate NLG performance the way André Bazin evaluated style in realism: as the art of realism lies in the seeming absence of artifice, so too does the art of algorithms lie in the seeming absence of automation. Commercialization only enhances this push towards verisimilitude, as investment banks and media outlets like Forbes won’t pay top dollar for software that generates strange prose. In turn, we come to judge machine intelligence by its humanness, orienting development efforts towards writing prose that we would have written ourselves.

It is this search to use automation as a vehicle for defamiliarization that makes NaNoGenMo so exciting. Darius Kazemi, an internet artist who runs an annual Bot Summit, created NaNoGenMo “on a whim” in November 2013. Thoughtful about literary form, Kazemi was amused by the fact that National Novel Writing Month (NaNoWriMo) set only two criteria for participants: submissions must be written in 30 days (the month of November) and must comprise at least 50,000 words. The absence of formal constraints invited experimentation: why write a novel when you can write an algorithm that writes a novel? He tweeted his idea, and a new community formed on GitHub (a web-based collaboration tool for software development).

The latest developments in machine learning are enabling machines to develop models of us in turn, ever updating what information they present and how they present it to match the input we provide. Kazemi is addressing this new give-and-take between man and machine head-on in his 2015 NaNoGenMo submission, “co-authoring” a novel with an algorithm: for every ten sentences the algorithm drafts, he commits only the one he, as the human, likes best. “Who wrote the book?” he asks. “[The algorithm] wrote literally every word, but [I] dictated nearly the entire form of the novel.” This is the same kind of dynamic that new research tools built on IBM Watson present to lawyers and doctors: ROSS, a legal tool built on the Watson API, presents answers to research questions, and all the lawyer has to do is commit the answer she likes best. If NaNoGenMo helps us think more deeply about that dynamic, it can offer important insights into the future of AI.

Allison Parrish’s I Waded in Clear Water uses sentiment analysis algorithms, which rank sentences based on features that indicate emotional texture, to transform Gustavus Hindman Miller’s Ten Thousand Dreams Interpreted. Parrish mobilizes the formulaic “action” and “denotation” structure of Miller’s text (action = “To see an oak full of acorns”; denotation = “denotes increase and promotion”). She first transforms the actions into first-person, simple-past sentences (“I saw an oak full of acorns”) and then reorders the sentences from the worst to the best thing that can happen in dreams, according to a score given by a sentiment analysis algorithm run on the denotation. The sentiment scores create short chapters (“I drove into muddy water. I saw others weeding.”) and longer chapters with paratactic strings of disjointed actions: “…I descended a ladder. I saw any one lame. I saw my lover taking laudanum through disappointment. I heard mocking laughter. I kept a ledge. I had lice on my body. I saw. I lost it. I felt melancholy over any event. I saw others melancholy. I sent a message….” According to the sentiment algorithm, wading in clear water is our best dream.

Technical constraints explain why NaNoGenMo has come to align itself with poetics of recontextualization and reassembly. Genuine NLG algorithms, that is, those that can build words and syntax from the building blocks of letters and get smarter over time, are still nascent. Most of the 2014 submissions instead use rules to transform existing texts in creative ways, which also leads to topical similarities.

Moniker, a design studio based in Amsterdam, wrote a simple query that scans Twitter for sentences of the form “it’s + hour + : + minute + am/pm + and +” to compose a real-time global diary of daily activities. The “it’s [time] and I…” formula tends to elicit predictable confessions or complaints, showing how stock expressions automate our thoughts: “It’s 12:20 and I need a drink”; “It’s 1:00 pm and I have not moved from my bed”; “It’s 11:00 pm and I’ve finally got a decent cup of coffee.” Twide and Twejudice replaces most of the dialogue in Austen’s Pride and Prejudice with words used in similar contexts on Twitter, resulting in frivolous dialogue: (Mr Bennet asking Mrs Bennet about Mr Bingley:) “Is he/she overrun 0r single?” (Mrs Bennet exclaiming about Mr Bingley’s arrival:) “What _a fineee thingi 4my rageaholics girls!” While these works lack the sophistication of The Seeker, by polluting Austen with Twitter diction they illustrate how contemporary media have modified communication norms.

While open to anyone and, as in NaNoWriMo, governed by the single constraint that submissions contain at least 50,000 words, NaNoGenMo is gradually defining itself as a cohesive artistic movement that uses algorithms to experiment with literary form. The group’s identity is partly generated by ressentiment towards negative criticism that their “disjointed, robotic scripts” are “unlikely to trouble Booker judges.” Last year, one participant mocked how “futile it is to try to explain what we’re actually doing here, to the normals.” More positively, they are shaping identity through shared formal and critical resources. John Ohno (alias enkiv2) posted code to generate sestinas, haikus, and synonyms. Allison Parrish (alias aparrish) shared an interface to the Carnegie Mellon Pronouncing Dictionary that enables users to do things like scrape the dictionary for rhymes for a given word. Finally, Isaac Karth (alias ikarth) explained to members how the group’s tendency to assemble new poetry from prior texts has intellectual roots in Dadaism, Burroughs’s cut-up techniques, and the constraint-oriented works of Oulipo. When I spoke with Kazemi about the project, he said that Ken Goldsmith’s Uncreative Writing had inspired his thinking on how NaNoGenMo can challenge customary notions of authorship and creativity.

Evaluating these works by their capacity to read like human prose is a stale exercise because what qualifies as “natural” language is relative, not absolute. Our own linguistic habits are developed through interaction with others, be they members of a given social class, colleagues at work or school, or spambots littering our Twitter feeds. In a recent Medium post, Katie Rose Pipkin eloquently described how machines have already modified what we think of as natural language, whether we’re cognizant of it or not. We speak differently to search tools and virtual assistants because we have come to develop a tacit understanding of how they work and can modify our requests to communicate effectively.

Story 8

It is this search to use automation as a vehicle for defamiliarization that makes NaNoGenMo so exciting. Darius Kazemi, an internet artist who runs an annual Bot Summit, created NaNoGenMo “on a whim” in November 2013. Thoughtful about literary form, Kazemi was amused by the fact that National Novel Writing Month (NaNoWriMo) set only two criteria for participants: submissions must be written in 30 days (the month of November) and must comprise at least 50,000 words. The absence of formal constraints invited experimentation: why write a novel when you can write an algorithm that writes a novel? He tweeted his idea, and a new community formed on GitHub (a web-based collaboration tool for software development).

At least two 2014 submissions use dreams as a locus to explore the odd beauty of machine intelligence. Thricedotted’s The Seeker relates the autobiography of a machine trying to “learn about human behavior by reading WikiHow.” The work is visually beautiful, with each iteration of the algorithm’s operations punctuated by pages that raindrop abstractions and house aphorisms like “imagine not one thing could be undirected.” Like the hopscotch overtones in Cortázar’s Rayuela, the aphorisms encourage the reader to perceive meaningful patterns in what might otherwise be random data (Thricedotted’s internet identity often mentions apophenia). Time and again, the algorithm repeats a “work, scan, imagine” loop, scraping WikiHow, searching plain text memories for a concept encountered during “work,” and building a dream sequence—or “univision”—from concepts it doesn’t recognize. These univisions contain the most surprising poetry in the work, where beauty arises from the reader’s ineluctable tendency to feel meaning in fragments.

Evaluating these works by their capacity to read like human prose is a stale exercise because what qualifies as “natural” language is relative, not absolute. Our own linguistic habits are developed through interaction with others, be they members of a given social class, colleagues at work or school, or spambots littering our Twitter feeds. In a recent Medium post, Katie Rose Pipkin eloquently described how machines have already modified what we think of as natural language, whether we’re cognizant of it or not. We speak differently to search tools and virtual assistants because we have come to develop a tacit understanding of how they work and can modify our requests to communicate effectively.

Technical constraints explain why NaNoGenMo has come to align itself with poetics of recontextualization and reassembly. Genuine NLG algorithms, that is, those that can build words and syntax from the building blocks of letters and get smarter over time, are still nascent. Most of the 2014 submissions instead use rules to transform existing texts in creative ways, which also leads to topical similarities.

The latest developments in machine learning are enabling machines to develop models of us in turn, ever updating what information they present and how they present it to match the input we provide. Kazemi is addressing this new give-and-take between man and machine head-on in his 2015 NaNoGenMo submission, “co-authoring” a novel with an algorithm: for every ten sentences the algorithm drafts, he commits only the one he, as the human, likes best. “Who wrote the book?” he asks. “[The algorithm] wrote literally every word, but [I] dictated nearly the entire form of the novel.” This is the same kind of dynamic that new research tools built on IBM Watson present to lawyers and doctors: ROSS, a legal tool built on the Watson API, presents answers to research questions, and all the lawyer has to do is commit the answer she likes best. If NaNoGenMo helps us think more deeply about that dynamic, it can offer important insights into the future of AI.

NLG algorithms are generally considered to be a form of “artificial” or “machine intelligence” because they do things—like write news articles about sports or the weather, or write real estate ads, as the prototype my Fast Forward Labs colleagues built—that we believe humans alone can do. (I hope to explore the implications of the historical, relativist concept of artificial intelligence, espoused by people like Nancy Fulda, in a separate post.) As illustrated in the WSJ article, most people then evaluate NLG performance the way André Bazin evaluated style in realism: as the art of realism lies in the seeming absence of artifice, so too does the art of algorithms lie in the seeming absence of automation. Commercialization only enhances this push towards verisimilitude, as investment banks and media outlets like Forbes won’t pay top dollar for software that generates strange prose. In turn, we come to judge machine intelligence by its humanness, orienting development efforts towards writing prose that we would have written ourselves.

Moniker, a design studio based in Amsterdam, wrote a simple query that scans Twitter for sentences of the form “it’s + hour + : + minute + am/pm + and +” to compose a real-time global diary of daily activities. The “it’s [time] and I…” formula tends to elicit predictable confessions or complaints, showing how stock expressions automate our thoughts: “It’s 12:20 and I need a drink”; “It’s 1:00 pm and I have not moved from my bed”; “It’s 11:00 pm and I’ve finally got a decent cup of coffee.” Twide and Twejudice replaces most of the dialogue in Austen’s Pride and Prejudice with words used in similar contexts on Twitter, resulting in frivolous dialogue: (Mr Bennet asking Mrs Bennet about Mr Bingley:) “Is he/she overrun 0r single?” (Mrs Bennet exclaiming about Mr Bingley’s arrival:) “What _a fineee thingi 4my rageaholics girls!” While these works lack the sophistication of The Seeker, by polluting Austen with Twitter diction they illustrate how contemporary media have modified communication norms.

What if machines generated text with different stylistic goals? Or rather, what if we evaluated machine intelligence not by its humanness but by its alienness, by its ability to generate something beyond what we could have created—or would have thought to create—without the assistance of an algorithm? What if automated prose could rupture our automatized perceptions, as Shklovsky described poetry in Art as Device, and offer a new vehicle for our own creativity?

Earlier this year, the Wall Street Journal (WSJ) published an article with an interactive sidebar featuring excerpts from financial investment research reports. Readers were prompted to identify whether the excerpts were written by robots or humans. Admittedly, Wall Street’s preference for terse prose over poetic flourish makes a challenge like this plausible. “Q2 cash balance expectation of $830m implies ~$80m of cash burn in Q2 after a $140m reduction in cash balance in Q1,” a sampled sentence, is effectively just three data points fused together with syntax. And that’s no coincidence. White-collar workers like journalists need not fear for their job security (at least not yet…) because new natural language generation (NLG) algorithms are very good at representing structured data sets in prose, but not yet very good at much else. That capability in itself is very powerful, as our ability to draw insights from data often depends on how they are presented (e.g., a chart reveals insights one would have missed in rows and columns). But it is a far cry from the creative courage required to build a world on a blank page.

While open to anyone and, as in NaNoWriMo, governed by the single constraint that submissions contain at least 50,000 words, NaNoGenMo is gradually defining itself as a cohesive artistic movement that uses algorithms to experiment with literary form. The group’s identity is partly generated by ressentiment towards negative criticism that their “disjointed, robotic scripts” are “unlikely to trouble Booker judges.” Last year, one participant mocked how “futile it is to try to explain what we’re actually doing here, to the normals.” More positively, they are shaping identity through shared formal and critical resources. John Ohno (alias enkiv2) posted code to generate sestinas, haikus, and synonyms. Allison Parrish (alias aparrish) shared an interface to the Carnegie Mellon Pronouncing Dictionary that enables users to do things like scrape the dictionary for rhymes for a given word. Finally, Isaac Karth (alias ikarth) explained to members how the group’s tendency to assemble new poetry from prior texts has intellectual roots in Dadaism, Burroughs’s cut-up techniques, and the constraint-oriented works of Oulipo. When I spoke with Kazemi about the project, he said that Ken Goldsmith’s Uncreative Writing had inspired his thinking on how NaNoGenMo can challenge customary notions of authorship and creativity.

Allison Parrish’s I Waded in Clear Water uses sentiment analysis algorithms, which rank sentences based on features that indicate emotional texture, to transform Gustavus Hindman Miller’s Ten Thousand Dreams Interpreted. Parrish mobilizes the formulaic “action” and “denotation” structure of Miller’s text (action = “To see an oak full of acorns”; denotation = “denotes increase and promotion”). She first transforms the actions into first-person, simple-past sentences (“I saw an oak full of acorns”) and then reorders the sentences from the worst to the best thing that can happen in dreams, according to a score given by a sentiment analysis algorithm run on the denotation. The sentiment scores create short chapters (“I drove into muddy water. I saw others weeding.”) and longer chapters with paratactic strings of disjointed actions: “…I descended a ladder. I saw any one lame. I saw my lover taking laudanum through disappointment. I heard mocking laughter. I kept a ledge. I had lice on my body. I saw. I lost it. I felt melancholy over any event. I saw others melancholy. I sent a message….” According to the sentiment algorithm, wading in clear water is our best dream.

Story 9

NLG algorithms are generally considered to be a form of “artificial” or “machine intelligence” because they do things—like write news articles about sports or the weather, or write real estate ads, as the prototype my Fast Forward Labs colleagues built—that we believe humans alone can do. (I hope to explore the implications of the historical, relativist concept of artificial intelligence, espoused by people like Nancy Fulda, in a separate post.) As illustrated in the WSJ article, most people then evaluate NLG performance the way André Bazin evaluated style in realism: as the art of realism lies in the seeming absence of artifice, so too does the art of algorithms lie in the seeming absence of automation. Commercialization only enhances this push towards verisimilitude, as investment banks and media outlets like Forbes won’t pay top dollar for software that generates strange prose. In turn, we come to judge machine intelligence by its humanness, orienting development efforts towards writing prose that we would have written ourselves.

Earlier this year, the Wall Street Journal (WSJ) published an article with an interactive sidebar featuring excerpts from financial investment research reports. Readers were prompted to identify whether the excerpts were written by robots or humans. Admittedly, Wall Street’s preference for terse prose over poetic flourish makes a challenge like this plausible. “Q2 cash balance expectation of $830m implies ~$80m of cash burn in Q2 after a $140m reduction in cash balance in Q1,” a sampled sentence, is effectively just three data points fused together with syntax. And that’s no coincidence. White-collar workers like journalists need not fear for their job security (at least not yet…) because new natural language generation (NLG) algorithms are very good at representing structured data sets in prose, but not yet very good at much else. That capability in itself is very powerful, as our ability to draw insights from data often depends on how they are presented (e.g., a chart reveals insights one would have missed in rows and columns). But it is a far cry from the creative courage required to build a world on a blank page.

At least two 2014 submissions use dreams as a locus to explore the odd beauty of machine intelligence. Thricedotted’s The Seeker relates the autobiography of a machine trying to “learn about human behavior by reading WikiHow.” The work is visually beautiful, with each iteration of the algorithm’s operations punctuated by pages that raindrop abstractions and house aphorisms like “imagine not one thing could be undirected.” Like the hopscotch overtones in Cortázar’s Rayuela, the aphorisms encourage the reader to perceive meaningful patterns in what might otherwise be random data (Thricedotted’s internet identity often mentions apophenia). Time and again, the algorithm repeats a “work, scan, imagine” loop, scraping WikiHow, searching plain text memories for a concept encountered during “work,” and building a dream sequence—or “univision”—from concepts it doesn’t recognize. These univisions contain the most surprising poetry in the work, where beauty arises from the reader’s ineluctable tendency to feel meaning in fragments.

Evaluating these works by their capacity to read like human prose is a stale exercise because what qualifies as “natural” language is relative, not absolute. Our own linguistic habits are developed through interaction with others, be they members of a given social class, colleagues at work or school, or spambots littering our Twitter feeds. In a recent Medium post, Katie Rose Pipkin eloquently described how machines have already modified what we think of as natural language, whether we’re cognizant of it or not. We speak differently to search tools and virtual assistants because we have come to develop a tacit understanding of how they work and can modify our requests to communicate effectively.

While open to anyone and, as in NaNoWriMo, governed by the single constraint that submissions contain at least 50,000 words, NaNoGenMo is gradually defining itself as a cohesive artistic movement that uses algorithms to experiment with literary form. The group’s identity is partly generated by ressentiment towards negative criticism that their “disjointed, robotic scripts” are “unlikely to trouble Booker judges.” Last year, one participant mocked how “futile it is to try to explain what we’re actually doing here, to the normals.” More positively, they are shaping identity through shared formal and critical resources. John Ohno (alias enkiv2) posted code to generate sestinas, haikus, and synonyms. Allison Parrish (alias aparrish) shared an interface to the Carnegie Mellon Pronouncing Dictionary that enables users to do things like scrape the dictionary for rhymes for a given word. Finally, Isaac Karth (alias ikarth) explained to members how the group’s tendency to assemble new poetry from prior texts has intellectual roots in Dadaism, Burroughs’s cut-up techniques, and the constraint-oriented works of Oulipo. When I spoke with Kazemi about the project, he said that Ken Goldsmith’s Uncreative Writing had inspired his thinking on how NaNoGenMo can challenge customary notions of authorship and creativity.

Moniker, a design studio based in Amsterdam, wrote a simple query that scans Twitter for sentences of the form “it’s + hour + : + minute + am/pm + and +” to compose a real-time global diary of daily activities. The “it’s [time] and I…” formula tends to elicit predictable confessions or complaints, showing how stock expressions automate our thoughts: “It’s 12:20 and I need a drink”; “It’s 1:00 pm and I have not moved from my bed”; “It’s 11:00 pm and I’ve finally got a decent cup of coffee.” Twide and Twejudice replaces most of the dialogue in Austen’s Pride and Prejudice with words used in similar contexts on Twitter, resulting in frivolous dialogue: (Mr Bennet asking Mrs Bennet about Mr Bingley:) “Is he/she overrun 0r single?” (Mrs Bennet exclaiming about Mr Bingley’s arrival:) “What _a fineee thingi 4my rageaholics girls!” While these works lack the sophistication of The Seeker, by polluting Austen with Twitter diction they illustrate how contemporary media have modified communication norms.

Technical constraints explain why NaNoGenMo has come to align itself with poetics of recontextualization and reassembly. Genuine NLG algorithms, that is, those that can build words and syntax from the building blocks of letters and get smarter over time, are still nascent. Most of the 2014 submissions instead use rules to transform existing texts in creative ways, which also leads to topical similarities.

The latest developments in machine learning are enabling machines to develop models of us in turn, ever updating what information they present and how they present it to match the input we provide. Kazemi is addressing this new give-and-take between man and machine head-on in his 2015 NaNoGenMo submission, “co-authoring” a novel with an algorithm: for every ten sentences the algorithm drafts, he commits only the one he, as the human, likes best. “Who wrote the book?” he asks. “[The algorithm] wrote literally every word, but [I] dictated nearly the entire form of the novel.” This is the same kind of dynamic that new research tools built on IBM Watson present to lawyers and doctors: ROSS, a legal tool built on the Watson API, presents answers to research questions, and all the lawyer has to do is commit the answer she likes best. If NaNoGenMo helps us think more deeply about that dynamic, it can offer important insights into the future of AI.

It is this search to use automation as a vehicle for defamiliarization that makes NaNoGenMo so exciting. Darius Kazemi, an internet artist who runs an annual Bot Summit, created NaNoGenMo “on a whim” in November 2013. Thoughtful about literary form, Kazemi was amused by the fact that National Novel Writing Month (NaNoWriMo) set only two criteria for participants: submissions must be written in 30 days (the month of November) and must comprise at least 50,000 words. The absence of formal constraints invited experimentation: why write a novel when you can write an algorithm that writes a novel? He tweeted his idea, and a new community formed on GitHub (a web-based collaboration tool for software development).

What if machines generated text with different stylistic goals? Or rather, what if we evaluated machine intelligence not by its humanness but by its alienness, by its ability to generate something beyond what we could have created—or would have thought to create—without the assistance of an algorithm? What if automated prose could rupture our automatized perceptions, as Shklovsky described poetry in Art as Device, and offer a new vehicle for our own creativity?

Allison Parrish’s I Waded in Clear Water uses sentiment analysis algorithms, which rank sentences based on features that indicate emotional texture, to transform Gustavus Hindman Miller’s Ten Thousand Dreams Interpreted. Parrish mobilizes the formulaic “action” and “denotation” structure of Miller’s text (action = “To see an oak full of acorns”; denotation = “denotes increase and promotion”). She first transforms the actions into first-person, simple-past sentences (“I saw an oak full of acorns”) and then reorders the sentences from the worst to the best thing that can happen in dreams, according to a score given by a sentiment analysis algorithm run on the denotation. The sentiment scores create short chapters (“I drove into muddy water. I saw others weeding.”) and longer chapters with paratactic strings of disjointed actions: “…I descended a ladder. I saw any one lame. I saw my lover taking laudanum through disappointment. I heard mocking laughter. I kept a ledge. I had lice on my body. I saw. I lost it. I felt melancholy over any event. I saw others melancholy. I sent a message….” According to the sentiment algorithm, wading in clear water is our best dream.

Story 10

Allison Parrish’s I Waded in Clear Water uses sentiment analysis algorithms, which rank sentences based on features that indicate emotional texture, to transform Gustavus Hindman Miller’s Ten Thousand Dreams Interpreted. Parrish mobilizes the formulaic “action” and “denotation” structure of Miller’s text (action = “To see an oak full of acorns”; denotation = “denotes increase and promotion”). She first transforms the actions into first-person, simple-past sentences (“I saw an oak full of acorns”) and then reorders the sentences from the worst to the best thing that can happen in dreams, according to a score given by a sentiment analysis algorithm run on the denotation. The sentiment scores create short chapters (“I drove into muddy water. I saw others weeding.”) and longer chapters with paratactic strings of disjointed actions: “…I descended a ladder. I saw any one lame. I saw my lover taking laudanum through disappointment. I heard mocking laughter. I kept a ledge. I had lice on my body. I saw. I lost it. I felt melancholy over any event. I saw others melancholy. I sent a message….” According to the sentiment algorithm, wading in clear water is our best dream.

At least two 2014 submissions use dreams as a locus to explore the odd beauty of machine intelligence. Thricedotted’s The Seeker relates the autobiography of a machine trying to “learn about human behavior by reading WikiHow.” The work is visually beautiful, with each iteration of the algorithm’s operations punctuated by pages that raindrop abstractions and house aphorisms like “imagine not one thing could be undirected.” Like the hopscotch overtones in Cortázar’s Rayuela, the aphorisms encourage the reader to perceive meaningful patterns in what might otherwise be random data (Thricedotted’s internet identity often mentions apophenia). Time and again, the algorithm repeats a “work, scan, imagine” loop, scraping WikiHow, searching plain text memories for a concept encountered during “work,” and building a dream sequence—or “univision”—from concepts it doesn’t recognize. These univisions contain the most surprising poetry in the work, where beauty arises from the reader’s ineluctable tendency to feel meaning in fragments.

It is this search to use automation as a vehicle for defamiliarization that makes NaNoGenMo so exciting. Darius Kazemi, an internet artist who runs an annual Bot Summit, created NaNoGenMo “on a whim” in November 2013. Thoughtful about literary form, Kazemi was amused by the fact that National Novel Writing Month (NaNoWriMo) set only two criteria for participants: submissions must be written in 30 days (the month of November) and must comprise at least 50,000 words. The absence of formal constraints invited experimentation: why write a novel when you can write an algorithm that writes a novel? He tweeted his idea, and a new community formed on GitHub (a web-based collaboration tool for software development).

Moniker, a design studio based in Amsterdam, wrote a simple query that scans Twitter for sentences of the form “it’s + hour + : + minute + am/pm + and +” to compose a real-time global diary of daily activities. The “it’s [time] and I…” formula tends to elicit predictable confessions or complaints, showing how stock expressions automate our thoughts: “It’s 12:20 and I need a drink”; “It’s 1:00 pm and I have not moved from my bed”; “It’s 11:00 pm and I’ve finally got a decent cup of coffee.” Twide and Twejudice replaces most of the dialogue in Austen’s Pride and Prejudice with words used in similar contexts on Twitter, resulting in frivolous dialogue: (Mr Bennet asking Mrs Bennet about Mr Bingley:) “Is he/she overrun 0r single?” (Mrs Bennet exclaiming about Mr Bingley’s arrival:) “What _a fineee thingi 4my rageaholics girls!” While these works lack the sophistication of The Seeker, by polluting Austen with Twitter diction they illustrate how contemporary media have modified communication norms.

NLG algorithms are generally considered to be a form of “artificial” or “machine intelligence” because they do things—like write news articles about sports or the weather, or write real estate ads, as the prototype my Fast Forward Labs colleagues built—that we believe humans alone can do. (I hope to explore the implications of the historical, relativist concept of artificial intelligence, espoused by people like Nancy Fulda, in a separate post.) As illustrated in the WSJ article, most people then evaluate NLG performance the way André Bazin evaluated style in realism: as the art of realism lies in the seeming absence of artifice, so too does the art of algorithms lie in the seeming absence of automation. Commercialization only enhances this push towards verisimilitude, as investment banks and media outlets like Forbes won’t pay top dollar for software that generates strange prose. In turn, we come to judge machine intelligence by its humanness, orienting development efforts towards writing prose that we would have written ourselves.

Technical constraints explain why NaNoGenMo has come to align itself with poetics of recontextualization and reassembly. Genuine NLG algorithms, that is, those that can build words and syntax from the building blocks of letters and get smarter over time, are still nascent. Most of the 2014 submissions instead use rules to transform existing texts in creative ways, which also leads to topical similarities.

Earlier this year, the Wall Street Journal (WSJ) published an article with an interactive sidebar featuring excerpts from financial investment research reports. Readers were prompted to identify whether the excerpts were written by robots or humans. Admittedly, Wall Street’s preference for terse prose over poetic flourish makes a challenge like this plausible. “Q2 cash balance expectation of $830m implies ~$80m of cash burn in Q2 after a $140m reduction in cash balance in Q1,” a sampled sentence, is effectively just three data points fused together with syntax. And that’s no coincidence. White-collar workers like journalists need not fear for their job security (at least not yet…) because new natural language generation (NLG) algorithms are very good at representing structured data sets in prose, but not yet very good at much else. That capability in itself is very powerful, as our ability to draw insights from data often depends on how they are presented (e.g., a chart reveals insights one would have missed in rows and columns). But it is a far cry from the creative courage required to build a world on a blank page.

While open to anyone and, as in NaNoWriMo, governed by the single constraint that submissions contain at least 50,000 words, NaNoGenMo is gradually defining itself as a cohesive artistic movement that uses algorithms to experiment with literary form. The group’s identity is partly generated by ressentiment towards negative criticism that their “disjointed, robotic scripts” are “unlikely to trouble Booker judges.” Last year, one participant mocked how “futile it is to try to explain what we’re actually doing here, to the normals.” More positively, they are shaping identity through shared formal and critical resources. John Ohno (alias enkiv2) posted code to generate sestinas, haikus, and synonyms. Allison Parrish (alias aparrish) shared an interface to the Carnegie Mellon Pronouncing Dictionary that enables users to do things like scrape the dictionary for rhymes for a given word. Finally, Isaac Karth (alias ikarth) explained to members how the group’s tendency to assemble new poetry from prior texts has intellectual roots in Dadaism, Burroughs’s cut-up techniques, and the constraint-oriented works of Oulipo. When I spoke with Kazemi about the project, he said that Ken Goldsmith’s Uncreative Writing had inspired his thinking on how NaNoGenMo can challenge customary notions of authorship and creativity.

What if machines generated text with different stylistic goals? Or rather, what if we evaluated machine intelligence not by its humanness but by its alienness, by its ability to generate something beyond what we could have created—or would have thought to create—without the assistance of an algorithm? What if automated prose could rupture our automatized perceptions, as Shklovsky described poetry in Art as Device, and offer a new vehicle for our own creativity?

The latest developments in machine learning are enabling machines to develop models of us in turn, ever updating what information they present and how they present it to match the input we provide. Kazemi is addressing this new give-and-take between man and machine head-on in his 2015 NaNoGenMo submission, “co-authoring” a novel with an algorithm: for every ten sentences the algorithm drafts, he commits only the one he, as the human, likes best. “Who wrote the book?” he asks. “[The algorithm] wrote literally every word, but [I] dictated nearly the entire form of the novel.” This is the same kind of dynamic that new research tools built on IBM Watson present to lawyers and doctors: ROSS, a legal tool built on the Watson API, presents answers to research questions, and all the lawyer has to do is commit the answer she likes best. If NaNoGenMo helps us think more deeply about that dynamic, it can offer important insights into the future of AI.

Evaluating these works by their capacity to read like human prose is a stale exercise because what qualifies as “natural” language is relative, not absolute. Our own linguistic habits are developed through interaction with others, be they members of a given social class, colleagues at work or school, or spambots littering our Twitter feeds. In a recent Medium post, Katie Rose Pipkin eloquently described how machines have already modified what we think of as natural language, whether we’re cognizant of it or not. We speak differently to search tools and virtual assistants because we have come to develop a tacit understanding of how they work and can modify our requests to communicate effectively.

Story 11

Moniker, a design studio based in Amsterdam, wrote a simple query that scans Twitter for sentences of the form “it’s + hour + : + minute + am/pm + and +” to compose a real-time global diary of daily activities. The “it’s [time] and I…” formula tends to elicit predictable confessions or complaints, showing how stock expressions automate our thoughts: “It’s 12:20 and I need a drink”; “It’s 1:00 pm and I have not moved from my bed”; “It’s 11:00 pm and I’ve finally got a decent cup of coffee.” Twide and Twejudice replaces most of the dialogue in Austen’s Pride and Prejudice with words used in similar contexts on Twitter, resulting in frivolous dialogue: (Mr Bennet asking Mrs Bennet about Mr Bingley:) “Is he/she overrun 0r single?” (Mrs Bennet exclaiming about Mr Bingley’s arrival:) “What _a fineee thingi 4my rageaholics girls!” While these works lack the sophistication of The Seeker, by polluting Austen with Twitter diction they illustrate how contemporary media have modified communication norms.

While open to anyone and, as in NaNoWriMo, governed by the single constraint that submissions contain at least 50,000 words, NaNoGenMo is gradually defining itself as a cohesive artistic movement that uses algorithms to experiment with literary form. The group’s identity is partly generated by ressentiment towards negative criticism that their “disjointed, robotic scripts” are “unlikely to trouble Booker judges.” Last year, one participant mocked how “futile it is to try to explain what we’re actually doing here, to the normals.” More positively, they are shaping identity through shared formal and critical resources. John Ohno (alias enkiv2) posted code to generate sestinas, haikus, and synonyms. Allison Parrish (alias aparrish) shared an interface to the Carnegie Mellon Pronouncing Dictionary that enables users to do things like scrape the dictionary for rhymes for a given word. Finally, Isaac Karth (alias ikarth) explained to members how the group’s tendency to assemble new poetry from prior texts has intellectual roots in Dadaism, Burroughs’s cut-up techniques, and the constraint-oriented works of Oulipo. When I spoke with Kazemi about the project, he said that Ken Goldsmith’s Uncreative Writing had inspired his thinking on how NaNoGenMo can challenge customary notions of authorship and creativity.

NLG algorithms are generally considered to be a form of “artificial” or “machine intelligence” because they do things—like write news articles about sports or the weather, or write real estate ads, as the prototype my Fast Forward Labs colleagues built—that we believe humans alone can do. (I hope to explore the implications of the historical, relativist concept of artificial intelligence, espoused by people like Nancy Fulda, in a separate post.) As illustrated in the WSJ article, most people then evaluate NLG performance the way André Bazin evaluated style in realism: as the art of realism lies in the seeming absence of artifice, so too does the art of algorithms lie in the seeming absence of automation. Commercialization only enhances this push towards verisimilitude, as investment banks and media outlets like Forbes won’t pay top dollar for software that generates strange prose. In turn, we come to judge machine intelligence by its humanness, orienting development efforts towards writing prose that we would have written ourselves.

The latest developments in machine learning are enabling machines to develop models of us in turn, ever updating what information they present and how they present it to match the input we provide. Kazemi is addressing this new give-and-take between man and machine head-on in his 2015 NaNoGenMo submission, “co-authoring” a novel with an algorithm: for every ten sentences the algorithm drafts, he commits only the one he, as the human, likes best. “Who wrote the book?” he asks. “[The algorithm] wrote literally every word, but [I] dictated nearly the entire form of the novel.” This is the same kind of dynamic that new research tools built on IBM Watson present to lawyers and doctors: ROSS, a legal tool built on the Watson API, presents answers to research questions, and all the lawyer has to do is commit the answer she likes best. If NaNoGenMo helps us think more deeply about that dynamic, it can offer important insights into the future of AI.

Evaluating these works by their capacity to read like human prose is a stale exercise because what qualifies as “natural” language is relative, not absolute. Our own linguistic habits are developed through interaction with others, be they members of a given social class, colleagues at work or school, or spambots littering our Twitter feeds. In a recent Medium post, Katie Rose Pipkin eloquently described how machines have already modified what we think of as natural language, whether we're cognizant of it or not. We speak differently to search tools and virtual assistants because we have come to develop a tacit understanding of how they work and can modify our requests to communicate effectively.

It is this search to use automation as a vehicle for defamiliarization that makes NaNoGenMo so exciting. Darius Kazemi, an internet artist who runs an annual Bot Summit, created NaNoGenMo “on a whim” in November 2013. Thoughtful about literary form, Kazemi was amused by the fact that National Novel Writing Month (NaNoWriMo) set only two criteria for participants: submissions must be written in 30 days (the month of November) and must comprise at least 50,000 words. The absence of formal constraints invited experimentation: why write a novel when you can write an algorithm that writes a novel? He tweeted his idea, and a new community formed on GitHub (a web-based software development collaboration tool).

At least two 2014 submissions use dreams as a locus to explore the odd beauty of machine intelligence. Thricedotted’s The Seeker relates the autobiography of a machine trying to “learn about human behavior by reading WikiHow.” The work is visually beautiful, with each iteration of the algorithm’s operations punctuated by pages that raindrop abstractions and house aphorisms like “imagine not one thing could be undirected.” Like the hopscotch overtones in Cortázar’s Rayuela, the aphorisms encourage the reader to perceive meaningful patterns in what might otherwise be random data (Thricedotted’s internet identity often mentions apophenia). Time and again, the algorithm repeats a “work, scan, imagine” loop, scraping WikiHow, searching plain text memories for a concept encountered during “work,” and building a dream sequence—or “univision”—from concepts it doesn’t recognize. These univisions contain the most surprising poetry in the work, where beauty arises from the reader’s ineluctable tendency to feel meaning in fragments.
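
A schematic rendering of that “work, scan, imagine” loop may help. The corpus, memory format, and concept extraction below are all stand-ins; thricedotted’s actual implementation is richer than this.

```python
known_concepts = {"door", "water", "hand"}
memories = []  # plain-text traces accumulated during "work"

def work(article: str) -> None:
    """Scrape a WikiHow-like text and store it as a plain-text memory."""
    memories.append(article.lower())

def scan() -> set:
    """Search the memories for concepts encountered during work."""
    concepts = set()
    for memory in memories:
        concepts.update(word.strip(".,!?") for word in memory.split())
    return concepts

def imagine(concepts: set) -> list:
    """Build a dream sequence (a "univision") from unrecognized concepts."""
    unknown = concepts - known_concepts
    return [f"a vision of {concept}" for concept in sorted(unknown)]

work("Open the door and imagine the undirected water")
print(imagine(scan()))
```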

Technical constraints explain why NaNoGenMo has come to align itself with poetics of recontextualization and reassembly. Indeed, genuine NLG algorithms, that is, those that can build words and syntax from the building blocks of letters and get smarter over time, are still nascent. Most of the 2014 submissions instead use rules to transform prior texts in creative ways, which also leads to topical similarities.

Earlier this year, the Wall Street Journal (WSJ) published an article with an interactive sidebar featuring excerpts from financial investment research reports. Readers were prompted to identify whether the excerpts were written by robots or humans. Admittedly, Wall Street’s preference for terse prose over poetic flourish makes a challenge like this plausible. “Q2 cash balance expectation of $830m implies ~$80m of cash burn in Q2 after a $140m reduction in cash balance in Q1,” a sampled sentence, is effectively just three data points fused together with syntax. And that’s no coincidence. White-collar workers like journalists need not fear for their job security (at least not yet…) because new natural language generation (NLG) algorithms are very good at representing structured data sets in prose, but not yet very good at much else. That capability in itself is powerful, as our ability to draw insights from data often depends on how they are presented (e.g., a chart reveals insights one would have missed in rows and columns). But it is a far cry from the creative courage required to build a world on a blank page.

What if machines generated text with different stylistic goals? Or rather, what if we evaluated machine intelligence not by its humanness but by its alienness, by its ability to generate something beyond what we could have created—or would have thought to create—without the assistance of an algorithm? What if automated prose could rupture our automatized perceptions, as Shklovsky described poetry in Art as Device, and offer a new vehicle for our own creativity?

Allison Parrish’s I Waded in Clear Water uses sentiment analysis algorithms, which rank sentences based upon features that indicate emotional texture, to transform Gustavus Hindman Miller’s Ten Thousand Dreams Interpreted. Parrish mobilizes the formulaic “action” and “denotation” structure of Miller’s text (action = “To see an oak full of acorns”; denotation = “denotes increase and promotion”). She first transforms the actions into first-person, simple-past sentences (“I saw an oak full of acorns”) and then reorders them from the worst to the best thing that can happen in dreams, according to a score given by a sentiment analysis algorithm run on each denotation. The sentiment scores create short chapters (“I drove into muddy water. I saw others weeding.”) and longer chapters with paratactic strings of disjointed actions: “…I descended a ladder. I saw any one lame. I saw my lover taking laudanum through disappointment. I heard mocking laughter. I kept a ledge. I had lice on my body. I saw. I lost it. I felt melancholy over any event. I saw others melancholy. I sent a message….” According to the sentiment algorithm, wading in clear water is our best dream.
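
As a rough sketch of that pipeline: recast each action in the first-person past, score its denotation, and sort the results from worst to best dream. TextBlob’s polarity score stands in for whatever sentiment model Parrish used, and the rewriting rule below is far cruder than her actual transformation.

```python
from textblob import TextBlob  # pip install textblob

entries = [
    ("To see an oak full of acorns", "denotes increase and promotion"),
    ("To dream of wading in clear water", "denotes fortunate endings"),
    ("To see muddy water", "denotes misfortune and quarrels"),
]

def first_person_past(action: str) -> str:
    # Naive recasting: "To see X" -> "I saw X" (toy-grade grammar).
    return action.replace("To see", "I saw").replace("To dream of", "I dreamed of")

scored = [
    (TextBlob(denotation).sentiment.polarity, first_person_past(action))
    for action, denotation in entries
]

for polarity, sentence in sorted(scored):  # worst dream first, best dream last
    print(f"{polarity:+.2f}  {sentence}")
```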

Story 12

Allison Parrish’s I Waded in Clear Water uses sentiment analysis algorithms, which rank sentences based upon features that indicate emotional texture, to transform Gustavus Hindman Miller’s Ten Thousand Dreams Interpreted. Parrish mobilizes the formulaic “action” and “denotation” structure of Miller’s text (action = “To see an oak full of acorns”; denotation = “denotes increase and promotion”). She first transforms the actions into first-person, simple-past sentences (“I saw an oak full of acorns”) and then reorders them from the worst to the best thing that can happen in dreams, according to a score given by a sentiment analysis algorithm run on each denotation. The sentiment scores create short chapters (“I drove into muddy water. I saw others weeding.”) and longer chapters with paratactic strings of disjointed actions: “…I descended a ladder. I saw any one lame. I saw my lover taking laudanum through disappointment. I heard mocking laughter. I kept a ledge. I had lice on my body. I saw. I lost it. I felt melancholy over any event. I saw others melancholy. I sent a message….” According to the sentiment algorithm, wading in clear water is our best dream.

NLG algorithms are generally considered to be a form of “artificial” or “machine intelligence” because they do things—like write news articles about sports or the weather, or write real estate ads, as does the prototype my Fast Forward Labs colleagues built—that we believe humans alone can do. (I hope to explore the implications of the historical, relativist concept of artificial intelligence, espoused by people like Nancy Fulda, in a separate post.) As illustrated in the WSJ article, most people then evaluate NLG performance like André Bazin evaluates style in realism: as the art of realism lies in the seeming absence of artifice, so too does the art of algorithms lie in the seeming absence of automation. Commercialization only enhances this push towards verisimilitude, as investment banks and news agencies like Forbes won’t pay top dollar for software that generates strange prose. In turn, we come to judge machine intelligence by its humanness, orienting development efforts towards writing prose that we would have written ourselves.

While open to anyone and, as in NaNoWriMo, governed by the single constraint that submissions contain at least 50,000 words, NaNoGenMo is gradually defining itself as a cohesive artistic movement that uses algorithms to experiment with literary form. The group’s identity is partly generated by ressentiment towards negative criticism that their “disjointed, robotic scripts” are “unlikely to trouble Booker judges.” Last year, one participant mocked how “futile it is to try to explain what we’re actually doing here, to the normals.” More positively, they are shaping identity through shared formal and critical resources. John Ohno (alias enkiv2) posted code to generate sestinas, haikus, and synonyms. Allison Parrish (alias aparrish) shared an interface to the Carnegie Mellon Pronouncing Dictionary that enables users to do things like scrape the dictionary for rhymes for a given word. Finally, Isaac Karth (alias ikarth) explained to members how the group’s tendency to assemble new poetry from prior texts has intellectual roots in Dadaism, Burroughs’s cut-up techniques, and the constraint-oriented works of Oulipo. When I spoke with Kazemi about the project, he said that Ken Goldsmith’s Uncreative Writing had inspired his thinking on how NaNoGenMo can challenge customary notions of authorship and creativity.

Earlier this year, the Wall Street Journal (WSJ) published an article with an interactive sidebar featuring excerpts from financial investment research reports. Readers were prompted to identify whether the excerpts were written by robots or humans. Admittedly, Wall Street’s preference for terse prose over poetic flourish makes a challenge like this plausible. “Q2 cash balance expectation of $830m implies ~$80m of cash burn in Q2 after a $140m reduction in cash balance in Q1,” a sampled sentence, is effectively just three data points fused together with syntax. And that’s no coincidence. White-collar workers like journalists need not fear for their job security (at least not yet…) because new natural language generation (NLG) algorithms are very good at representing structured data sets in prose, but not yet very good at much else. That capability in itself is powerful, as our ability to draw insights from data often depends on how they are presented (e.g., a chart reveals insights one would have missed in rows and columns). But it is a far cry from the creative courage required to build a world on a blank page.

What if machines generated text with different stylistic goals? Or rather, what if we evaluated machine intelligence not by its humanness but by its alienness, by its ability to generate something beyond what we could have created—or would have thought to create—without the assistance of an algorithm? What if automated prose could rupture our automatized perceptions, as Shklovsky described poetry in Art as Device, and offer a new vehicle for our own creativity?

Evaluating these works by their capacity to read like human prose is a stale exercise because what qualifies as “natural” language is relative, not absolute. Our own linguistic habits are developed through interaction with others, be they members of a given social class, colleagues at work or school, or spambots littering our Twitter feeds. In a recent Medium post, Katie Rose Pipkin eloquently described how machines have already modified what we think of as natural language, whether we're cognizant of it or not. We speak differently to search tools and virtual assistants because we have come to develop a tacit understanding of how they work and can modify our requests to communicate effectively.

Technical constraints explain why NaNoGenMo has come to align itself with poetics of recontextualization and reassembly. Indeed, genuine NLG algorithms, that is, those that can build words and syntax from the building blocks of letters and get smarter over time, are still nascent. Most of the 2014 submissions instead use rules to transform prior texts in creative ways, which also leads to topical similarities.

At least two 2014 submissions use dreams as a locus to explore the odd beauty of machine intelligence. Thricedotted’s The Seeker relates the autobiography of a machine trying to “learn about human behavior by reading WikiHow.” The work is visually beautiful, with each iteration of the algorithm’s operations punctuated by pages that raindrop abstractions and house aphorisms like “imagine not one thing could be undirected.” Like the hopscotch overtones in Cortázar’s Rayuela, the aphorisms encourage the reader to perceive meaningful patterns in what might otherwise be random data (Thricedotted’s internet identity often mentions apophenia). Time and again, the algorithm repeats a “work, scan, imagine” loop, scraping WikiHow, searching plain text memories for a concept encountered during “work,” and building a dream sequence—or “univision”—from concepts it doesn’t recognize. These univisions contain the most surprising poetry in the work, where beauty arises from the reader’s ineluctable tendency to feel meaning in fragments.

The latest developments in machine learning are enabling machines to develop models of us in turn, ever updating what information they present and how they present it to match the input we provide. Kazemi addresses this new give-and-take between man and machine head-on in his 2015 NaNoGenMo submission, “co-authoring” a novel with an algorithm: for every ten sentences the algorithm drafts, he commits only the one he, as the human, likes best. “Who wrote the book?” he asks. “[The algorithm] wrote literally every word, but [I] dictated nearly the entire form of the novel.” This is the same kind of dynamic that new research tools built on IBM Watson present to lawyers and doctors: ROSS, a legal tool built on the Watson API, presents answers to research questions, and all the lawyer has to do is commit the answer she likes best. If NaNoGenMo helps us think more deeply about that dynamic, it can offer important insights into the overall future of AI.

Moniker, a design studio based in Amsterdam, wrote a simple query that scans Twitter for sentences in the form “it’s + hour + : + minute + am/pm + and +” to compose a real-time global diary of daily activities. The “it’s [time] and I am…” template tends to elicit predictable confessions or complaints, showing how stock expressions automate our thoughts: “It’s 12:20 and I need a drink;” “It’s 1:00 pm and I have not moved from my bed;” “It’s 11:00 pm and I’ve finally got a decent cup of coffee.” Twide and Twejudice replaces most of the dialogue in Austen’s Pride and Prejudice with words used in similar contexts on Twitter, resulting in frivolous dialogue: (Mr Bennet asking Mrs Bennet about Mr Bingley:) “Is he/she overrun 0r single?” (Mrs Bennet exclaiming about Mr Bingley’s arrival:) “What _a fineee thingi 4my rageaholics girls!” While these lack the sophistication of The Seeker, by polluting Austen with Twitter diction they illustrate how contemporary media have modified communication norms.

It is this search to use automation as a vehicle for defamiliarization that makes NaNoGenMo so exciting. Darius Kazemi, an internet artist who runs an annual Bot Summit, created NaNoGenMo “on a whim” in November 2013. Thoughtful about literary form, Kazemi was amused by the fact that National Novel Writing Month (NaNoWriMo) set only two criteria for participants: submissions must be written in 30 days (the month of November) and must comprise at least 50,000 words. The absence of formal constraints invited experimentation: why write a novel when you can write an algorithm that writes a novel? He tweeted his idea, and a new community formed on GitHub (a web-based software development collaboration tool).

Story 13

At least two 2014 submissions use dreams as a locus to explore the odd beauty of machine intelligence. Thricedotted’s The Seeker relates the autobiography of a machine trying to “learn about human behavior by reading WikiHow.” The work is visually beautiful, with each iteration of the algorithm’s operations punctuated by pages that raindrop abstractions and house aphorisms like “imagine not one thing could be undirected.” Like the hopscotch overtones in Cortázar’s Rayuela, the aphorisms encourage the reader to perceive meaningful patterns in what might otherwise be random data (Thricedotted’s internet identity often mentions apophenia). Time and again, the algorithm repeats a “work, scan, imagine” loop, scraping WikiHow, searching plain text memories for a concept encountered during “work,” and building a dream sequence—or “univision”—from concepts it doesn’t recognize. These univisions contain the most surprising poetry in the work, where beauty arises from the reader’s ineluctable tendency to feel meaning in fragments.

It is this search to use automation as a vehicle for defamiliarization that makes NaNoGenMo so exciting. Darius Kazemi, an internet artist who runs an annual Bot Summit, created NaNoGenMo “on a whim” in November 2013. Thoughtful about literary form, Kazemi was amused by the fact that National Novel Writing Month (NaNoWriMo) set only two criteria for participants: submissions must be written in 30 days (the month of November) and must comprise at least 50,000 words. The absence of formal constraints invited experimentation: why write a novel when you can write an algorithm that writes a novel? He tweeted his idea, and a new community formed on GitHub (a web-based software development collaboration tool).

Allison Parrish’s I Waded in Clear Water uses sentiment analysis algorithms, which rank sentences based upon features that indicate emotional texture, to transform Gustavus Hindman Miller’s Ten Thousand Dreams Interpreted. Parrish mobilizes the formulaic “action” and “denotation” structure of Miller’s text (action = “To see an oak full of acorns”; denotation = “denotes increase and promotion”). She first transforms the actions into first-person, simple-past sentences (“I saw an oak full of acorns”) and then reorders them from the worst to the best thing that can happen in dreams, according to a score given by a sentiment analysis algorithm run on each denotation. The sentiment scores create short chapters (“I drove into muddy water. I saw others weeding.”) and longer chapters with paratactic strings of disjointed actions: “…I descended a ladder. I saw any one lame. I saw my lover taking laudanum through disappointment. I heard mocking laughter. I kept a ledge. I had lice on my body. I saw. I lost it. I felt melancholy over any event. I saw others melancholy. I sent a message….” According to the sentiment algorithm, wading in clear water is our best dream.

What if machines generated text with different stylistic goals? Or rather, what if we evaluated machine intelligence not by its humanness but by its alienness, by its ability to generate something beyond what we could have created—or would have thought to create—without the assistance of an algorithm? What if automated prose could rupture our automatized perceptions, as Shklovsky described poetry in Art as Device, and offer a new vehicle for our own creativity?

Moniker, a design studio based in Amsterdam, wrote a simple query that scans Twitter for sentences in the form “it’s + hour + : + minute + am/pm + and +” to compose a real-time global diary of daily activities. The “it’s [time] and I am…” template tends to elicit predictable confessions or complaints, showing how stock expressions automate our thoughts: “It’s 12:20 and I need a drink;” “It’s 1:00 pm and I have not moved from my bed;” “It’s 11:00 pm and I’ve finally got a decent cup of coffee.” Twide and Twejudice replaces most of the dialogue in Austen’s Pride and Prejudice with words used in similar contexts on Twitter, resulting in frivolous dialogue: (Mr Bennet asking Mrs Bennet about Mr Bingley:) “Is he/she overrun 0r single?” (Mrs Bennet exclaiming about Mr Bingley’s arrival:) “What _a fineee thingi 4my rageaholics girls!” While these lack the sophistication of The Seeker, by polluting Austen with Twitter diction they illustrate how contemporary media have modified communication norms.

The latest developments in machine learning are enabling machines to develop models of us in turn, ever updating what information they present and how they present it to match the input we provide. Kazemi addresses this new give-and-take between man and machine head-on in his 2015 NaNoGenMo submission, “co-authoring” a novel with an algorithm: for every ten sentences the algorithm drafts, he commits only the one he, as the human, likes best. “Who wrote the book?” he asks. “[The algorithm] wrote literally every word, but [I] dictated nearly the entire form of the novel.” This is the same kind of dynamic that new research tools built on IBM Watson present to lawyers and doctors: ROSS, a legal tool built on the Watson API, presents answers to research questions, and all the lawyer has to do is commit the answer she likes best. If NaNoGenMo helps us think more deeply about that dynamic, it can offer important insights into the overall future of AI.

Evaluating these works by their capacity to read like human prose is a stale exercise because what qualifies as “natural” language is relative, not absolute. Our own linguistic habits are developed through interaction with others, be they members of a given social class, colleagues at work or school, or spambots littering our Twitter feeds. In a recent Medium post, Katie Rose Pipkin eloquently described how machines have already modified what we think of as natural language, whether we're cognizant of it or not. We speak differently to search tools and virtual assistants because we have come to develop a tacit understanding of how they work and can modify our requests to communicate effectively.

While open to anyone and, as in NaNoWriMo, governed by the single constraint that submissions contain at least 50,000 words, NaNoGenMo is gradually defining itself as a cohesive artistic movement that uses algorithms to experiment with literary form. The group’s identity is partly generated by ressentiment towards negative criticism that their “disjointed, robotic scripts” are “unlikely to trouble Booker judges.” Last year, one participant mocked how “futile it is to try to explain what we’re actually doing here, to the normals.” More positively, they are shaping identity through shared formal and critical resources. John Ohno (alias enkiv2) posted code to generate sestinas, haikus, and synonyms. Allison Parrish (alias aparrish) shared an interface to the Carnegie Mellon Pronouncing Dictionary that enables users to do things like scrape the dictionary for rhymes for a given word. Finally, Isaac Karth (alias ikarth) explained to members how the group’s tendency to assemble new poetry from prior texts has intellectual roots in Dadaism, Burroughs’s cut-up techniques, and the constraint-oriented works of Oulipo. When I spoke with Kazemi about the project, he said that Ken Goldsmith’s Uncreative Writing had inspired his thinking on how NaNoGenMo can challenge customary notions of authorship and creativity.

Technical constraints explain why NaNoGenMo has come to align itself with poetics of recontextualization and reassembly. Indeed, genuine NLG algorithms, that is, those that can build words and syntax from the building blocks of letters and get smarter over time, are still nascent. Most of the 2014 submissions instead use rules to transform prior texts in creative ways, which also leads to topical similarities.

NLG algorithms are generally considered to be a form of “artificial” or “machine intelligence” because they do things—like write news articles about sports or the weather, or write real estate ads, as does the prototype my Fast Forward Labs colleagues built—that we believe humans alone can do. (I hope to explore the implications of the historical, relativist concept of artificial intelligence, espoused by people like Nancy Fulda, in a separate post.) As illustrated in the WSJ article, most people then evaluate NLG performance like André Bazin evaluates style in realism: as the art of realism lies in the seeming absence of artifice, so too does the art of algorithms lie in the seeming absence of automation. Commercialization only enhances this push towards verisimilitude, as investment banks and news agencies like Forbes won’t pay top dollar for software that generates strange prose. In turn, we come to judge machine intelligence by its humanness, orienting development efforts towards writing prose that we would have written ourselves.

Earlier this year, the Wall Street Journal (WSJ) published an article with an interactive sidebar featuring excerpts from financial investment research reports. Readers were prompted to identify whether the excerpts were written by robots or humans. Admittedly, Wall Street’s preference for terse prose over poetic flourish makes a challenge like this plausible. “Q2 cash balance expectation of $830m implies ~$80m of cash burn in Q2 after a $140m reduction in cash balance in Q1,” a sampled sentence, is effectively just three data points fused together with syntax. And that’s no coincidence. White-collar workers like journalists need not fear for their job security (at least not yet…) because new natural language generation (NLG) algorithms are very good at representing structured data sets in prose, but not yet very good at much else. That capability in itself is powerful, as our ability to draw insights from data often depends on how they are presented (e.g., a chart reveals insights one would have missed in rows and columns). But it is a far cry from the creative courage required to build a world on a blank page.

Story 14

While open to anyone and, as in NaNoWriMo, governed by the single constraint that submissions contain at least 50,000 words, NaNoGenMo is gradually defining itself as a cohesive artistic movement that uses algorithms to experiment with literary form. The group’s identity is partly generated by ressentiment towards negative criticism that their “disjointed, robotic scripts” are “unlikely to trouble Booker judges.” Last year, one participant mocked how “futile it is to try to explain what we’re actually doing here, to the normals.” More positively, they are shaping identity through shared formal and critical resources. John Ohno (alias enkiv2) posted code to generate sestinas, haikus, and synonyms. Allison Parrish (alias aparrish) shared an interface to the Carnegie Mellon Pronouncing Dictionary that enables users to do things like scrape the dictionary for rhymes for a given word. Finally, Isaac Karth (alias ikarth) explained to members how the group’s tendency to assemble new poetry from prior texts has intellectual roots in Dadaism, Burroughs’s cut-up techniques, and the constraint-oriented works of Oulipo. When I spoke with Kazemi about the project, he said that Ken Goldsmith’s Uncreative Writing had inspired his thinking on how NaNoGenMo can challenge customary notions of authorship and creativity.

What if machines generated text with different stylistic goals? Or rather, what if we evaluated machine intelligence not by its humanness but by its alienness, by its ability to generate something beyond what we could have created—or would have thought to create—without the assistance of an algorithm? What if automated prose could rupture our automatized perceptions, as Shklovsky described poetry in Art as Device, and offer a new vehicle for our own creativity?

Technical constraints explain why NaNoGenMo has come to align itself with poetics of recontextualization and reassembly. Indeed, genuine NLG algorithms, that is, those that can build words and syntax from the building blocks of letters and get smarter over time, are still nascent. Most of the 2014 submissions instead use rules to transform prior texts in creative ways, which also leads to topical similarities.

Evaluating these works by their capacity to read like human prose is a stale exercise because what qualifies as “natural” language is relative, not absolute. Our own linguistic habits are developed through interaction with others, be they members of a given social class, colleagues at work or school, or spambots littering our Twitter feeds. In a recent Medium post, Katie Rose Pipkin eloquently described how machines have already modified what we think of as natural language, whether we're cognizant of it or not. We speak differently to search tools and virtual assistants because we have come to develop a tacit understanding of how they work and can modify our requests to communicate effectively.

NLG algorithms are generally considered to be a form of “artificial” or “machine intelligence” because they do things—like write news articles about sports or the weather, or write real estate ads, as does the prototype my Fast Forward Labs colleagues built—that we believe humans alone can do. (I hope to explore the implications of the historical, relativist concept of artificial intelligence, espoused by people like Nancy Fulda, in a separate post.) As illustrated in the WSJ article, most people then evaluate NLG performance like André Bazin evaluates style in realism: as the art of realism lies in the seeming absence of artifice, so too does the art of algorithms lie in the seeming absence of automation. Commercialization only enhances this push towards verisimilitude, as investment banks and news agencies like Forbes won’t pay top dollar for software that generates strange prose. In turn, we come to judge machine intelligence by its humanness, orienting development efforts towards writing prose that we would have written ourselves.

Earlier this year, the Wall Street Journal (WSJ) published an article with an interactive sidebar featuring excerpts from financial investment research reports. Readers were prompted to identify whether the excerpts were written by robots or humans. Admittedly, Wall Street’s preference for terse prose over poetic flourish makes a challenge like this plausible. “Q2 cash balance expectation of $830m implies ~$80m of cash burn in Q2 after a $140m reduction in cash balance in Q1,” a sampled sentence, is effectively just three data points fused together with syntax. And that’s no coincidence. White-collar workers like journalists need not fear for their job security (at least not yet…) because new natural language generation (NLG) algorithms are very good at representing structured data sets in prose, but not yet very good at much else. That capability in itself is powerful, as our ability to draw insights from data often depends on how they are presented (e.g., a chart reveals insights one would have missed in rows and columns). But it is a far cry from the creative courage required to build a world on a blank page.

Allison Parrish’s I Waded in Clear Water uses sentiment analysis algorithms, which rank sentences based upon features that indicate emotional texture, to transform Gustavus Hindman Miller’s Ten Thousand Dreams Interpreted. Parrish mobilizes the formulaic “action” and “denotation” structure of Miller’s text (action = “To see an oak full of acorns”; denotation = “denotes increase and promotion”). She first transforms the actions into first-person, simple-past sentences (“I saw an oak full of acorns”) and then reorders them from the worst to the best thing that can happen in dreams, according to a score given by a sentiment analysis algorithm run on each denotation. The sentiment scores create short chapters (“I drove into muddy water. I saw others weeding.”) and longer chapters with paratactic strings of disjointed actions: “…I descended a ladder. I saw any one lame. I saw my lover taking laudanum through disappointment. I heard mocking laughter. I kept a ledge. I had lice on my body. I saw. I lost it. I felt melancholy over any event. I saw others melancholy. I sent a message….” According to the sentiment algorithm, wading in clear water is our best dream.

Moniker, a design studio based in Amsterdam, wrote a simple query that scans Twitter for sentences in the form “it’s + hour + : + minute + am/pm + and +” to compose a real-time global diary of daily activities. The “it’s [time] and I am…” template tends to elicit predictable confessions or complaints, showing how stock expressions automate our thoughts: “It’s 12:20 and I need a drink;” “It’s 1:00 pm and I have not moved from my bed;” “It’s 11:00 pm and I’ve finally got a decent cup of coffee.” Twide and Twejudice replaces most of the dialogue in Austen’s Pride and Prejudice with words used in similar contexts on Twitter, resulting in frivolous dialogue: (Mr Bennet asking Mrs Bennet about Mr Bingley:) “Is he/she overrun 0r single?” (Mrs Bennet exclaiming about Mr Bingley’s arrival:) “What _a fineee thingi 4my rageaholics girls!” While these lack the sophistication of The Seeker, by polluting Austen with Twitter diction they illustrate how contemporary media have modified communication norms.

It is this search to use automation as a vehicle for defamiliarization that makes NaNoGenMo so exciting. Darius Kazemi, an internet artist who runs an annual Bot Summit, created NaNoGenMo “on a whim” in November 2013. Thoughtful about literary form, Kazemi was amused by the fact that National Novel Writing Month (NaNoWriMo) set only two criteria for participants: submissions must be written in 30 days (the month of November) and must comprise at least 50,000 words. The absence of formal constraints invited experimentation: why write a novel when you can write an algorithm that writes a novel? He tweeted his idea, and a new community formed on GitHub (a web-based software development collaboration tool).

At least two 2014 submissions use dreams as a locus to explore the odd beauty of machine intelligence. Thricedotted’s The Seeker relates the autobiography of a machine trying to “learn about human behavior by reading WikiHow.” The work is visually beautiful, with each iteration of the algorithm’s operations punctuated by pages that raindrop abstractions and house aphorisms like “imagine not one thing could be undirected.” Like the hopscotch overtones in Cortázar’s Rayuela, the aphorisms encourage the reader to perceive meaningful patterns in what might otherwise be random data (Thricedotted’s internet identity often mentions apophenia). Time and again, the algorithm repeats a “work, scan, imagine” loop, scraping WikiHow, searching plain text memories for a concept encountered during “work,” and building a dream sequence—or “univision”—from concepts it doesn’t recognize. These univisions contain the most surprising poetry in the work, where beauty arises from the reader’s ineluctable tendency to feel meaning in fragments.

The latest developments in machine learning are enabling machines to develop models of us in turn, ever updating what information they present and how they present it to match the input we provide. Kazemi addresses this new give-and-take between man and machine head-on in his 2015 NaNoGenMo submission, “co-authoring” a novel with an algorithm: for every ten sentences the algorithm drafts, he commits only the one he, as the human, likes best. “Who wrote the book?” he asks. “[The algorithm] wrote literally every word, but [I] dictated nearly the entire form of the novel.” This is the same kind of dynamic that new research tools built on IBM Watson present to lawyers and doctors: ROSS, a legal tool built on the Watson API, presents answers to research questions, and all the lawyer has to do is commit the answer she likes best. If NaNoGenMo helps us think more deeply about that dynamic, it can offer important insights into the overall future of AI.

Story 15

Allison Parrish’s I Waded in Clear Water uses sentiment analysis algorithms, which rank sentences based upon features that indicate emotional texture, to transform Gustavus Hindman Miller’s Ten Thousand Dreams Interpreted. Parrish mobilizes the formulaic “action” and “denotation” structure of Miller’s text (action = “To see an oak full of acorns”; denotation = “denotes increase and promotion”). She first transforms the actions into first-person, simple-past sentences (“I saw an oak full of acorns”) and then reorders them from the worst to the best thing that can happen in dreams, according to a score given by a sentiment analysis algorithm run on each denotation. The sentiment scores create short chapters (“I drove into muddy water. I saw others weeding.”) and longer chapters with paratactic strings of disjointed actions: “…I descended a ladder. I saw any one lame. I saw my lover taking laudanum through disappointment. I heard mocking laughter. I kept a ledge. I had lice on my body. I saw. I lost it. I felt melancholy over any event. I saw others melancholy. I sent a message….” According to the sentiment algorithm, wading in clear water is our best dream.

NLG algorithms are generally considered to be a form of “artificial” or “machine intelligence” because they do things—like write news articles about sports or the weather, or write real estate ads, as does the prototype my Fast Forward Labs colleagues built—that we believe humans alone can do. (I hope to explore the implications of the historical, relativist concept of artificial intelligence, espoused by people like Nancy Fulda, in a separate post.) As illustrated in the WSJ article, most people then evaluate NLG performance like André Bazin evaluates style in realism: as the art of realism lies in the seeming absence of artifice, so too does the art of algorithms lie in the seeming absence of automation. Commercialization only enhances this push towards verisimilitude, as investment banks and news agencies like Forbes won’t pay top dollar for software that generates strange prose. In turn, we come to judge machine intelligence by its humanness, orienting development efforts towards writing prose that we would have written ourselves.

What if machines generated text with different stylistic goals? Or rather, what if we evaluated machine intelligence not by its humanness but by its alienness, by its ability to generate something beyond what we could have created—or would have thought to create—without the assistance of an algorithm? What if automated prose could rupture our automatized perceptions, as Shklovsky described poetry in Art as Device, and offer a new vehicle for our own creativity?

At least two 2014 submissions use dreams as a locus to explore the odd beauty of machine intelligence. Thricedotted’s The Seeker relates the autobiography of a machine trying to “learn about human behavior by reading WikiHow.” The work is visually beautiful, with each iteration of the algorithm’s operations punctuated by pages that raindrop abstractions and house aphorisms like “imagine not one thing could be undirected.” Like the hopscotch overtones in Cortázar’s Rayuela, the aphorisms encourage the reader to perceive meaningful patterns in what might otherwise be random data (Thricedotted’s internet identity often mentions apophenia). Time and again, the algorithm repeats a “work, scan, imagine” loop, scraping WikiHow, searching plain text memories for a concept encountered during “work,” and building a dream sequence—or “univision”—from concepts it doesn’t recognize. These univisions contain the most surprising poetry in the work, where beauty arises from the reader’s ineluctable tendency to feel meaning in fragments.

Evaluating these works by their capacity to read like human prose is a stale exercise because what qualifies as “natural” language is relative, not absolute. Our own linguistic habits are developed through interaction with others, be they members of a given social class, colleagues at work or school, or spambots littering our Twitter feeds. In a recent Medium post, Katie Rose Pipkin eloquently described how machines have already modified what we think of as natural language, whether we're cognizant of it or not. We speak differently to search tools and virtual assistants because we have come to develop a tacit understanding of how they work and can modify our requests to communicate effectively.

Moniker, a design studio based in Amsterdam, wrote a simple query that scans Twitter for sentences in the form “it’s + hour + : + minute + am/pm + and +” to compose a real-time global diary of daily activities. The “it’s [time] and I am…” template tends to elicit predictable confessions or complaints, showing how stock expressions automate our thoughts: “It’s 12:20 and I need a drink;” “It’s 1:00 pm and I have not moved from my bed;” “It’s 11:00 pm and I’ve finally got a decent cup of coffee.” Twide and Twejudice replaces most of the dialogue in Austen’s Pride and Prejudice with words used in similar contexts on Twitter, resulting in frivolous dialogue: (Mr Bennet asking Mrs Bennet about Mr Bingley:) “Is he/she overrun 0r single?” (Mrs Bennet exclaiming about Mr Bingley’s arrival:) “What _a fineee thingi 4my rageaholics girls!” While these lack the sophistication of The Seeker, by polluting Austen with Twitter diction they illustrate how contemporary media have modified communication norms.

Technical constraints explain why NaNoGenMo has come to align itself with poetics of recontextualization and reassembly. Indeed, genuine NLG algorithms, that is, those that can build words and syntax from the building blocks of letters and get smarter over time, are still nascent. Most of the 2014 submissions instead use rules to transform prior texts in creative ways, which also leads to topical similarities.

The latest developments in machine learning are enabling machines to develop models of us in turn, ever updating what information they present and how they present it to match the input we provide. Kazemi addresses this new give-and-take between man and machine head-on in his 2015 NaNoGenMo submission, “co-authoring” a novel with an algorithm: for every ten sentences the algorithm drafts, he commits only the one he, as the human, likes best. “Who wrote the book?” he asks. “[The algorithm] wrote literally every word, but [I] dictated nearly the entire form of the novel.” This is the same kind of dynamic that new research tools built on IBM Watson present to lawyers and doctors: ROSS, a legal tool built on the Watson API, presents answers to research questions, and all the lawyer has to do is commit the answer she likes best. If NaNoGenMo helps us think more deeply about that dynamic, it can offer important insights into the overall future of AI.

While open to anyone and, as in NaNoWriMo, governed by the single constraint that submissions contain at least 50,000 words, NaNoGenMo is gradually defining itself as a cohesive artistic movement that uses algorithms to experiment with literary form. The group’s identity is partly generated by ressentiment towards negative criticism that their “disjointed, robotic scripts” are “unlikely to trouble Booker judges.” Last year, one participant mocked how “futile it is to try to explain what we’re actually doing here, to the normals.” More positively, they are shaping identity through shared formal and critical resources. John Ohno (alias enkiv2) posted code to generate sestinas, haikus, and synonyms. Allison Parrish (alias aparrish) shared an interface to the Carnegie Mellon Pronouncing Dictionary that enables users to do things like scrape the dictionary for rhymes for a given word. Finally, Isaac Karth (alias ikarth) explained to members how the group’s tendency to assemble new poetry from prior texts has intellectual roots in Dadaism, Burroughs’s cut-up techniques, and the constraint-oriented works of Oulipo. When I spoke with Kazemi about the project, he said that Ken Goldsmith’s Uncreative Writing had inspired his thinking on how NaNoGenMo can challenge customary notions of authorship and creativity.

It is this search to use automation as a vehicle for defamiliarization that makes NaNoGenMo so exciting. Darius Kazemi, an internet artist who runs an annual Bot Summit, created NaNoGenMo “on a whim” in November 2013. Thoughtful about literary form, Kazemi was amused by the fact that National Novel Writing Month (NaNoWriMo) set only two criteria for participants: submissions must be written in 30 days (the month of November) and must comprise at least 50,000 words. The absence of formal constraints invited experimentation: why write a novel when you can write an algorithm that writes a novel? He tweeted his idea, and a new community formed on GitHub (a web-based software development collaboration tool).

Earlier this year, the Wall Street Journal (WSJ) published an article with an interactive sidebar featuring excerpts from financial investment research reports. Readers were prompted to identify whether the excerpts were written by robots or humans. Admittedly, Wall Street’s preference for terse prose over poetic flourish makes a challenge like this plausible. “Q2 cash balance expectation of $830m implies ~$80m of cash burn in Q2 after a $140m reduction in cash balance in Q1,” a sampled sentence, is effectively just three data points fused together with syntax. And that’s no coincidence. White-collar workers like journalists need not fear for their job security (at least not yet…) because new natural language generation (NLG) algorithms are very good at representing structured data sets in prose, but not yet very good at much else. That capability in itself is powerful, as our ability to draw insights from data often depends on how they are presented (e.g., a chart reveals insights one would have missed in rows and columns). But it is a far cry from the creative courage required to build a world on a blank page.

Story 16

It is this search to use automation as a vehicle for defamiliarization that makes NaNoGenMo so exciting. Darius Kazemi, an internet artist who runs an annual Bot Summit, created NaNoGenMo “on a whim” in November 2013. Thoughtful about literary form, Kazemi was amused by the fact that National Novel Writing Month (NaNoWriMo) set only two criteria for participants: submissions must be written in 30 days (the month of November) and must comprise at least 50,000 words. The absence of formal constraints invited experimentation: why write a novel when you can write an algorithm that writes a novel? He tweeted his idea, and a new community formed on GitHub (a web-based software development collaboration tool).

Moniker, a design studio based in Amsterdam, wrote a simple query that scans Twitter for sentences in the form “it’s + hour + : + minute + am/pm + and +” to compose a real-time global diary of daily activities. The “it’s [time] and I am…” template tends to elicit predictable confessions or complaints, showing how stock expressions automate our thoughts: “It’s 12:20 and I need a drink;” “It’s 1:00 pm and I have not moved from my bed;” “It’s 11:00 pm and I’ve finally got a decent cup of coffee.” Twide and Twejudice replaces most of the dialogue in Austen’s Pride and Prejudice with words used in similar contexts on Twitter, resulting in frivolous dialogue: (Mr Bennet asking Mrs Bennet about Mr Bingley:) “Is he/she overrun 0r single?” (Mrs Bennet exclaiming about Mr Bingley’s arrival:) “What _a fineee thingi 4my rageaholics girls!” While these lack the sophistication of The Seeker, by polluting Austen with Twitter diction they illustrate how contemporary media have modified communication norms.

Earlier this year, the Wall Street Journal (WSJ) published an article with an interactive sidebar featuring excerpts from financial investment research reports. Readers were prompted to identify whether the excerpts were written by robots or humans. Admittedly, Wall Street’s preference for terse prose over poetic flourish makes a challenge like this plausible. “Q2 cash balance expectation of $830m implies ~$80m of cash burn in Q2 after a $140m reduction in cash balance in Q1,” a sampled sentence, is effectively just three data points fused together with syntax. And that’s no coincidence. White-collar workers like journalists need not fear for their job security (at least not yet…) because new natural language generation (NLG) algorithms are very good at representing structured data sets in prose, but not yet very good at much else. That capability in itself is powerful, as our ability to draw insights from data often depends on how they are presented (e.g., a chart reveals insights one would have missed in rows and columns). But it is a far cry from the creative courage required to build a world on a blank page.

At least two 2014 submissions use dreams as a locus to explore the odd beauty of machine intelligence. Thricedotted’s The Seeker relates the autobiography of a machine trying to “learn about human behavior by reading WikiHow.” The work is visually beautiful, with each iteration of the algorithm’s operations punctuated by pages that raindrop abstractions and house aphorisms like “imagine not one thing could be undirected.” Like the hopscotch overtones in Cortázar’s Rayuela, the aphorisms encourage the reader to perceive meaningful patterns in what might otherwise be random data (Thricedotted’s internet identity often mentions apophenia). Time and again, the algorithm repeats a “work, scan, imagine” loop, scraping WikiHow, searching plain text memories for a concept encountered during “work,” and building a dream sequence—or “univision”—from concepts it doesn’t recognize. These univisions contain the most surprising poetry in the work, where beauty arises from the reader’s ineluctable tendency to feel meaning in fragments.

The latest developments in machine learning are enabling machines to develop models of us in turn, ever updating what information they present and how they present it to match the input we provide. Kazemi addresses this new give-and-take between man and machine head-on in his 2015 NaNoGenMo submission, “co-authoring” a novel with an algorithm: for every ten sentences the algorithm drafts, he commits only the one he, as the human, likes best. “Who wrote the book?” he asks. “[The algorithm] wrote literally every word, but [I] dictated nearly the entire form of the novel.” This is the same kind of dynamic that new research tools built on IBM Watson present to lawyers and doctors: ROSS, a legal tool built on the Watson API, presents answers to research questions, and all the lawyer has to do is commit the answer she likes best. If NaNoGenMo helps us think more deeply about that dynamic, it can offer important insights into the overall future of AI.

While open to anyone and, as in NaNoWriMo, governed by the single constraint that submissions contain at least 50,000 words, NaNoGenMo is gradually defining itself as a cohesive artistic movement that uses algorithms to experiment with literary form. The group’s identity is partly generated by ressentiment towards negative criticism that their “disjointed, robotic scripts” are “unlikely to trouble Booker judges.” Last year, one participant mocked how “futile it is to try to explain what we’re actually doing here, to the normals.” More positively, they are shaping identity through shared formal and critical resources. John Ohno (alias enkiv2) posted code to generate sestinas, haikus, and synonyms. Allison Parrish (alias aparrish) shared an interface to the Carnegie Mellon Pronouncing Dictionary that enables users to do things like scrape the dictionary for rhymes for a given word. Finally, Isaac Karth (alias ikarth) explained to members how the group’s tendency to assemble new poetry from prior texts has intellectual roots in Dadaism, Burroughs’s cut-up techniques, and the constraint-oriented works of Oulipo. When I spoke with Kazemi about the project, he said that Ken Goldsmith’s Uncreative Writing had inspired his thinking on how NaNoGenMo can challenge customary notions of authorship and creativity.

Evaluating these works by their capacity to read like human prose is a stale exercise because what qualifies as “natural” language is relative, not absolute. Our own linguistic habits are developed through interaction with others, be they members of a given social class, colleagues at work or school, or spambots littering our Twitter feeds. In a recent Medium post, Katie Rose Pipkin eloquently described how machines have already modified what we think of as natural language, whether we're cognizant of it or not. We speak differently to search tools and virtual assistants because we have come to develop a tacit understanding of how they work and can modify our requests to communicate effectively.

Technical constraints explain why NaNoGenMo has come to align itself with poetics of recontextualization and reassembly. Indeed, genuine NLG algorithms, that is, those that can build words and syntax from the building blocks of letters and get smarter over time, are still nascent. Most of the 2014 submissions instead use rules to transform prior texts in creative ways, which also leads to topical similarities.

Allison Parrish’s I Waded in Clear Water uses sentiment analysis algorithms, which rank sentences based upon features that indicate emotional texture, to transform Gustavus Hindman Miller’s Ten Thousand Dreams Interpreted. Parrish mobilizes the formulaic “action” and “denotation” structure of Miller’s text (action = “To see an oak full of acorns”; denotation = “denotes increase and promotion”). She first transforms the actions into first-person, simple-past sentences (“I saw an oak full of acorns”) and then reorders them from the worst to the best thing that can happen in dreams, according to a score given by a sentiment analysis algorithm run on each denotation. The sentiment scores create short chapters (“I drove into muddy water. I saw others weeding.”) and longer chapters with paratactic strings of disjointed actions: “…I descended a ladder. I saw any one lame. I saw my lover taking laudanum through disappointment. I heard mocking laughter. I kept a ledge. I had lice on my body. I saw. I lost it. I felt melancholy over any event. I saw others melancholy. I sent a message….” According to the sentiment algorithm, wading in clear water is our best dream.

NLG algorithms are generally considered to be a form of “artificial” or “machine intelligence” because they do things—like write news articles about sports or the weather, or write real estate ads, as does the prototype my Fast Forward Labs colleagues built—that we believe humans alone can do. (I hope to explore the implications of the historical, relativist concept of artificial intelligence, espoused by people like Nancy Fulda, in a separate post.) As illustrated in the WSJ article, most people then evaluate NLG performance like André Bazin evaluates style in realism: as the art of realism lies in the seeming absence of artifice, so too does the art of algorithms lie in the seeming absence of automation. Commercialization only enhances this push towards verisimilitude, as investment banks and news agencies like Forbes won’t pay top dollar for software that generates strange prose. In turn, we come to judge machine intelligence by its humanness, orienting development efforts towards writing prose that we would have written ourselves.

What if machines generated text with different stylistic goals? Or rather, what if we evaluated machine intelligence not by its humanness but by its alienness, by its ability to generate something beyond what we could have created—or would have thought to create—without the assistance of an algorithm? What if automated prose could rupture our automatized perceptions, as Shklovsky described poetry in Art as Device, and offer a new vehicle for our own creativity?

Story 17

It is this search to use automation as a vehicle for defamiliarization that makes NaNoGenMo so exciting. Darius Kazemi, an internet artist who runs an annual Bot Summit, created NaNoGenMo “on a whim” in November 2013. Thoughtful about literary form, Kazemi was amused by the fact that National Novel Writing Month (NaNoWriMo) set only two criteria for participants: submissions must be written in 30 days (the month of November) and must comprise at least 50,000 words. The absence of formal constraints invited experimentation: why write a novel when you can write an algorithm that writes a novel? He tweeted his idea, and a new community formed on GitHub (a web-based software development collaboration tool).

Evaluating these works by their capacity to read like human prose is a stale exercise because what qualifies as “natural” language is relative, not absolute. Our own linguistic habits are developed through interaction with others, be they members of a given social class, colleagues at work or school, or spambots littering our Twitter feeds. In a recent Medium post, Katie Rose Pipkin eloquently described how machines have already modified what we think of as natural language, whether we're cognizant of it or not. We speak differently to search tools and virtual assistants because we have come to develop a tacit understanding of how they work and can modify our requests to communicate effectively.

Earlier this year, the Wall Street Journal (WSJ) published an article with an interactive sidebar featuring excerpts from financial investment research reports. Readers were prompted to identify whether the excerpts were written by robots or humans. Admittedly, Wall Street’s preference for terse prose over poetic flourish makes a challenge like this plausible. “Q2 cash balance expectation of $830m implies ~$80m of cash burn in Q2 after a $140m reduction in cash balance in Q1,” a sampled sentence, is effectively just three data points fused together with syntax. And that’s no coincidence. White-collar workers like journalists need not fear for their job security (at least not yet…) because new natural language generation (NLG) algorithms are very good at representing structured data sets in prose, but not yet very good at much else. That capability in itself is very powerful, as our ability to draw insights from data often depends on how they are presented (e.g., a chart reveals insights one would have missed in rows and columns). But it is a far cry from the creative courage required to build a world on a blank page.
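
That “three data points fused together with syntax” observation is easy to make concrete: this class of NLG is essentially templating over structured fields. A hypothetical sketch (the field names and template are invented, not taken from any vendor’s system):

```python
# Illustrative template-based NLG: structured data in, terse analyst prose out.
# The report fields and template are invented for this example.
report = {"q2_cash_expectation_m": 830, "q2_burn_m": 80, "q1_reduction_m": 140}

template = ("Q2 cash balance expectation of ${q2_cash_expectation_m}m implies "
            "~${q2_burn_m}m of cash burn in Q2 after a ${q1_reduction_m}m "
            "reduction in cash balance in Q1")

print(template.format(**report))
```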

Technical constraints explain why NaNoGenMo has come to align itself with a poetics of recontextualization and reassembly. Indeed, genuine NLG algorithms, that is, those that can build words and syntax from the building blocks of letters and get smarter over time, are still nascent. Most of the 2014 submissions instead use rules to transform existing texts in creative ways, which also leads to topical similarities.

NLG algorithms are generally considered to be a form of “artificial” or “machine intelligence” because they do things—like write news articles about sports or the weather, or write real estate ads, as the prototype my Fast Forward Labs colleagues built—we believe humans alone can do. (I hope to explore the implications of the historical, relativist concept of artificial intelligence, espoused by people like Nancy Fulda, in a separate post.) As illustrated in the WSJ article, most people then evaluate NLG performance like André Bazin evaluates style in realism: as the art of realism lies in the seeming absence of artifice, so too does the art of algorithms lie in the seeming absence of automation. Commercialization only enhances this push towards verisimilitude, as investment banks and news agencies like Forbes won’t pay top dollar for software that generates strange prose. In turn, we come to judge machine intelligence by its humanness, orienting development efforts towards writing prose that we would have written ourselves.

At least two 2014 submissions use dreams as a locus to explore the odd beauty of machine intelligence. Thricedotted’s The Seeker relates the autobiography of a machine trying to “learn about human behavior by reading WikiHow.” The work is visually beautiful, with each iteration of the algorithm’s operations punctuated by pages that raindrop abstractions and house aphorisms like “imagine not one thing could be undirected.” Like the hopscotch overtones in Cortázar’s Rayuela, the aphorisms encourage the reader to perceive meaningful patterns in what might otherwise be random data (Thricedotted’s internet identity often mentions apophenia). Time and again, the algorithm repeats a “work, scan, imagine” loop, scraping WikiHow, searching plain text memories for a concept encountered during “work,” and building a dream sequence—or “univision”—from concepts it doesn’t recognize. These univisions contain the most surprising poetry in the work, where beauty arises from the reader’s ineluctable tendency to feel meaning in fragments.
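
A schematic of that “work, scan, imagine” loop as described above; this is not Thricedotted’s code, and the WikiHow scraping and concept extraction are stubbed out:

```python
# Schematic of The Seeker's "work, scan, imagine" loop as described in the
# text; not Thricedotted's implementation. Scraping/extraction are stubbed.
memory = set()   # plain-text concepts the machine has already encountered
dreams = []      # generated "univisions"

def work(step):
    """Stub: scrape a WikiHow article and return the concepts it mentions."""
    samples = [{"fold a shirt", "patience"},
               {"patience", "directed motion"},
               {"tie a knot", "fold a shirt"}]
    return samples[step % len(samples)]

def imagine(unknown):
    """Build a dream sequence from concepts the machine doesn't recognize."""
    return "univision: " + " / ".join(sorted(unknown))

for step in range(3):                    # "time and again"
    concepts = work(step)                # work
    unknown = concepts - memory          # scan memory for the unfamiliar
    if unknown:
        dreams.append(imagine(unknown))  # imagine
    memory |= concepts                   # the unfamiliar becomes familiar

print("\n".join(dreams))
```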

Allison Parrish’s I Waded in Clear Water uses sentiment analysis algorithms, which rank sentences based on features that indicate emotional texture, to transform Gustavus Hindman Miller’s Ten Thousand Dreams Interpreted. Parrish mobilizes the formulaic “action” and “denotation” structure of Miller’s text (action = “To see an oak full of acorns”; denotation = “denotes increase and promotion”). She first transforms the actions into first-person, simple-past sentences (“I saw an oak full of acorns”) and then reorders the sentences from the worst to the best thing that can happen in dreams, according to a score given by a sentiment analysis algorithm run on the denotation. The sentiment scores create short chapters (“I drove into muddy water. I saw others weeding.”) and longer chapters with paratactic strings of disjointed actions: “…I descended a ladder. I saw any one lame. I saw my lover taking laudanum through disappointment. I heard mocking laughter. I kept a ledge. I had lice on my body. I saw. I lost it. I felt melancholy over any event. I saw others melancholy. I sent a message….” According to the sentiment algorithm, wading in clear water is our best dream.

While open to anyone and, as in NaNoWriMo, governed by the single constraint that submissions contain at least 50,000 words, NaNoGenMo is gradually defining itself as a cohesive artistic movement that uses algorithms to experiment with literary form. The group’s identity is partly generated by ressentiment towards negative criticism that their “disjointed, robotic scripts” are “unlikely to trouble Booker judges.” Last year, one participant mocked how “futile it is to try to explain what we’re actually doing here, to the normals.” More positively, they are shaping identity through shared formal and critical resources. John Ohno (alias enkiv2) posted code to generate sestinas, haikus, and synonyms. Allison Parrish (alias aparrish) shared an interface to the Carnegie Mellon Pronouncing Dictionary that enables users to do things like scrape the dictionary for rhymes for a given word. Finally, Isaac Karth (alias ikarth) explained to members how the group’s tendency to assemble new poetry from prior texts has intellectual roots in Dadaism, Burroughs’s cut-up techniques, and the constraint-oriented works of Oulipo. When I spoke with Kazemi about the project, he said that Ken Goldsmith’s Uncreative Writing had inspired his thinking on how NaNoGenMo can challenge customary notions of authorship and creativity.
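
For the rhyme-scraping side, Parrish’s `pronouncing` library wraps the CMU Pronouncing Dictionary in exactly this spirit; treating it as the same interface shared on the thread is an assumption here, but the calls below are the library’s real API:

```python
# Rhyme lookup against the CMU Pronouncing Dictionary via Allison Parrish's
# `pronouncing` library (pip install pronouncing). Whether this is the exact
# interface shared on the NaNoGenMo thread is an assumption.
import pronouncing

word = "water"
print(pronouncing.rhymes(word)[:10])         # CMU-dict rhymes for `word`

# The same library exposes phones, useful for meter as well as rhyme:
phones = pronouncing.phones_for_word(word)   # e.g. ['W AO1 T ER0']
print(pronouncing.syllable_count(phones[0]))
```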

Moniker, a design studio based in Amsterdam, wrote a simple query that scans Twitter for sentences in the form “it’s + hour + : + minute + am/pm + and +” to compose a real-time global diary of daily activities. The “it’s [hour] and I…” template tends to elicit predictable confessions or complaints, showing how stock expressions automate our thoughts: “It’s 12:20 and I need a drink;” “It’s 1:00 pm and I have not moved from my bed;” “It’s 11:00 pm and I’ve finally got a decent cup of coffee.” Twide and Twejudice replaces most of the dialogue in Austen’s Pride and Prejudice with words used in similar contexts on Twitter, resulting in frivolous dialogue: (Mr Bennet asking Mrs Bennet about Mr Bingley:) “Is he/she overrun 0r single?” (Mrs Bennet exclaiming about Mr Bingley’s arrival:) “What _a fineee thingi 4my rageaholics girls!” While these lack the sophistication of The Seeker, by polluting Austen with Twitter diction, they illustrate how contemporary media have modified communication norms.
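
A regex sketch of the pattern the Moniker query scans for; the tweets are invented, and the real project presumably tolerated more spelling variants than this:

```python
# Illustrative matcher for the "it's + hour + : + minute + am/pm + and" form
# Moniker scanned Twitter for. The tweets are invented examples.
import re

pattern = re.compile(
    r"\bit['’]?s\s+(\d{1,2}):(\d{2})\s*([ap]\.?m\.?)\s+and\s+(.+)",
    re.IGNORECASE,
)

tweets = [
    "It's 1:00 pm and I have not moved from my bed",
    "it's 11:00 PM and I've finally got a decent cup of coffee",
    "an unrelated tweet about lunch",
]

for tweet in tweets:
    match = pattern.search(tweet)
    if match:
        hour, minute, meridiem, rest = match.groups()
        print(f"{hour}:{minute} {meridiem.lower()} -> {rest}")
```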

The latest developments in machine learning are enabling machines to develop models of us in turn, ever updating what information they present and how they present it to match the input we provide. Kazemi addresses this new give-and-take between man and machine head-on in his 2015 NaNoGenMo submission, “co-authoring” a novel with an algorithm: for every ten sentences the algorithm drafts, he commits only the one he, as the human, likes best. “Who wrote the book?” he asks. “[The algorithm] wrote literally every word, but [I] dictated nearly the entire form of the novel.” This is the same kind of dynamic that new research tools built on IBM Watson present to lawyers and doctors: ROSS, a legal tool built on the Watson API, presents answers to research questions, and all the lawyer has to do is commit the answer she likes best. If NaNoGenMo helps us think more deeply about that dynamic, it can offer important insights into the overall future of AI.
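
The one-sentence-in-ten workflow Kazemi describes is easy to sketch; the candidate generator below is a placeholder, not his actual model:

```python
# Sketch of the human-in-the-loop selection Kazemi describes: the algorithm
# drafts ten candidates, the human commits exactly one. The generator is a
# placeholder; any sentence-level language model could slot in.
import random

def draft_sentences(n=10):
    subjects = ["The harbor", "A stranger", "The letter", "Night"]
    verbs = ["waited", "burned", "returned", "dissolved"]
    return [f"{random.choice(subjects)} {random.choice(verbs)}."
            for _ in range(n)]

novel = []
for _ in range(3):                 # three committed sentences, for the demo
    candidates = draft_sentences()
    for i, sentence in enumerate(candidates):
        print(i, sentence)
    pick = int(input("Commit which sentence? "))  # the human's only move
    novel.append(candidates[pick])

print(" ".join(novel))
```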

What if machines generated text with different stylistic goals? Or rather, what if we evaluated machine intelligence not by its humanness but by its alienness, by its ability to generate something beyond what we could have created—or would have thought to create—without the assistance of an algorithm? What if automated prose could rupture our automatized perceptions, as Shklovsky described poetry in Art as Device, and offer a new vehicle for our own creativity?

Story 18

Allison Parrish’s I Waded in Clear Water uses sentiment analysis algorithms, which rank sentences based on features that indicate emotional texture, to transform Gustavus Hindman Miller’s Ten Thousand Dreams Interpreted. Parrish mobilizes the formulaic “action” and “denotation” structure of Miller’s text (action = “To see an oak full of acorns”; denotation = “denotes increase and promotion”). She first transforms the actions into first-person, simple-past sentences (“I saw an oak full of acorns”) and then reorders the sentences from the worst to the best thing that can happen in dreams, according to a score given by a sentiment analysis algorithm run on the denotation. The sentiment scores create short chapters (“I drove into muddy water. I saw others weeding.”) and longer chapters with paratactic strings of disjointed actions: “…I descended a ladder. I saw any one lame. I saw my lover taking laudanum through disappointment. I heard mocking laughter. I kept a ledge. I had lice on my body. I saw. I lost it. I felt melancholy over any event. I saw others melancholy. I sent a message….” According to the sentiment algorithm, wading in clear water is our best dream.

The latest developments in machine learning are enabling machines to develop models of us in turn, ever updating what information they present and how they present it to match the input we provide. Kazemi addresses this new give-and-take between man and machine head-on in his 2015 NaNoGenMo submission, “co-authoring” a novel with an algorithm: for every ten sentences the algorithm drafts, he commits only the one he, as the human, likes best. “Who wrote the book?” he asks. “[The algorithm] wrote literally every word, but [I] dictated nearly the entire form of the novel.” This is the same kind of dynamic that new research tools built on IBM Watson present to lawyers and doctors: ROSS, a legal tool built on the Watson API, presents answers to research questions, and all the lawyer has to do is commit the answer she likes best. If NaNoGenMo helps us think more deeply about that dynamic, it can offer important insights into the overall future of AI.

Evaluating these works by their capacity to read like human prose is a stale exercise because what qualifies as “natural” language is relative, not absolute. Our own linguistic habits are developed through interaction with others, be they members of a given social class, colleagues at work or school, or spambots littering our Twitter feeds. In a recent Medium post, Katie Rose Pipkin eloquently described how machines have already modified what we think of as natural language, whether we’re cognizant of it or not. We speak differently to search tools and virtual assistants because we have come to develop a tacit understanding of how they work and can modify our requests to communicate effectively.

Technical constraints explain why NaNoGenMo has come to align itself with a poetics of recontextualization and reassembly. Indeed, genuine NLG algorithms, that is, those that can build words and syntax from the building blocks of letters and get smarter over time, are still nascent. Most of the 2014 submissions instead use rules to transform existing texts in creative ways, which also leads to topical similarities.

While open to anyone and, as in NaNoWriMo, governed by the single constraint that submissions contain at least 50,000 words, NaNoGenMo is gradually defining itself as a cohesive artistic movement that uses algorithms to experiment with literary form. The group’s identity is partly generated by ressentiment towards negative criticism that their “disjointed, robotic scripts” are “unlikely to trouble Booker judges.” Last year, one participant mocked how “futile it is to try to explain what we’re actually doing here, to the normals.” More positively, they are shaping identity through shared formal and critical resources. John Ohno (alias enkiv2) posted code to generate sestinas, haikus, and synonyms. Allison Parrish (alias aparrish) shared an interface to the Carnegie Mellon Pronouncing Dictionary that enables users to do things like scrape the dictionary for rhymes for a given word. Finally, Isaac Karth (alias ikarth) explained to members how the group’s tendency to assemble new poetry from prior texts has intellectual roots in Dadaism, Burroughs’s cut-up techniques, and the constraint-oriented works of Oulipo. When I spoke with Kazemi about the project, he said that Ken Goldsmith’s Uncreative Writing had inspired his thinking on how NaNoGenMo can challenge customary notions of authorship and creativity.

Earlier this year, the Wall Street Journal (WSJ) published an article with an interactive sidebar featuring excerpts from financial investment research reports. Readers were prompted to identify whether the excerpts were written by robots or humans. Admittedly, Wall Street’s preference for terse prose over poetic flourish makes a challenge like this plausible. “Q2 cash balance expectation of $830m implies ~$80m of cash burn in Q2 after a $140m reduction in cash balance in Q1,” a sampled sentence, is effectively just three data points fused together with syntax. And that’s no coincidence. White-collar workers like journalists need not fear for their job security (at least not yet…) because new natural language generation (NLG) algorithms are very good at representing structured data sets in prose, but not yet very good at much else. That capability in itself is very powerful, as our ability to draw insights from data often depends on how they are presented (e.g., a chart reveals insights one would have missed in rows and columns). But it is a far cry from the creative courage required to build a world on a blank page.

It is this search to use automation as a vehicle for defamiliarization that makes NaNoGenMo so exciting. Darius Kazemi, an internet artist who runs an annual Bot Summit, created NaNoGenMo “on a whim” in November 2013. Thoughtful about literary form, Kazemi was amused by the fact that National Novel Writing Month (NaNoWriMo) set only two criteria for participants: submissions must be written in 30 days (the month of November) and must comprise at least 50,000 words. The absence of formal constraints invited experimentation: why write a novel when you can write an algorithm that writes a novel? He tweeted his idea, and a new community formed on GitHub (a web-based software development collaboration tool).

At least two 2014 submissions use dreams as a locus to explore the odd beauty of machine intelligence. Thricedotted’s The Seeker relates the autobiography of a machine trying to “learn about human behavior by reading WikiHow.” The work is visually beautiful, with each iteration of the algorithm’s operations punctuated by pages that raindrop abstractions and house aphorisms like “imagine not one thing could be undirected.” Like the hopscotch overtones in Cortázar’s Rayuela, the aphorisms encourage the reader to perceive meaningful patterns in what might otherwise be random data (Thricedotted’s internet identity often mentions apophenia). Time and again, the algorithm repeats a “work, scan, imagine” loop, scraping WikiHow, searching plain text memories for a concept encountered during “work,” and building a dream sequence—or “univision”—from concepts it doesn’t recognize. These univisions contain the most surprising poetry in the work, where beauty arises from the reader’s ineluctable tendency to feel meaning in fragments.

NLG algorithms are generally considered to be a form of “artificial” or “machine intelligence” because they do things—like write news articles about sports or the weather, or write real estate ads, as the prototype my Fast Forward Labs colleagues built—we believe humans alone can do. (I hope to explore the implications of the historical, relativist concept of artificial intelligence, espoused by people like Nancy Fulda, in a separate post.) As illustrated in the WSJ article, most people then evaluate NLG performance like André Bazin evaluates style in realism: as the art of realism lies in the seeming absence of artifice, so too does the art of algorithms lie in the seeming absence of automation. Commercialization only enhances this push towards verisimilitude, as investment banks and news agencies like Forbes won’t pay top dollar for software that generates strange prose. In turn, we come to judge machine intelligence by its humanness, orienting development efforts towards writing prose that we would have written ourselves.

Moniker, a design studio based in Amsterdam, wrote a simple query that scans Twitter for sentences in the form “it’s + hour + : + minute + am/pm + and +” to compose a real-time global diary of daily activities. The “it’s [hour] and I…” template tends to elicit predictable confessions or complaints, showing how stock expressions automate our thoughts: “It’s 12:20 and I need a drink;” “It’s 1:00 pm and I have not moved from my bed;” “It’s 11:00 pm and I’ve finally got a decent cup of coffee.” Twide and Twejudice replaces most of the dialogue in Austen’s Pride and Prejudice with words used in similar contexts on Twitter, resulting in frivolous dialogue: (Mr Bennet asking Mrs Bennet about Mr Bingley:) “Is he/she overrun 0r single?” (Mrs Bennet exclaiming about Mr Bingley’s arrival:) “What _a fineee thingi 4my rageaholics girls!” While these lack the sophistication of The Seeker, by polluting Austen with Twitter diction, they illustrate how contemporary media have modified communication norms.

What if machines generated text with different stylistic goals? Or rather, what if we evaluated machine intelligence not by its humanness but by its alienness, by its ability to generate something beyond what we could have created—or would have thought to create—without the assistance of an algorithm? What if automated prose could rupture our automatized perceptions, as Shklovsky described poetry in Art as Device, and offer a new vehicle for our own creativity?

Story 19

At least two 2014 submissions use dreams as a locus to explore the odd beauty of machine intelligence. Thricedotted’s The Seeker relates the autobiography of a machine trying to “learn about human behavior by reading WikiHow.” The work is visually beautiful, with each iteration of the algorithm’s operations punctuated by pages that raindrop abstractions and house aphorisms like “imagine not one thing could be undirected.” Like the hopscotch overtones in Cortázar’s Rayuela, the aphorisms encourage the reader to perceive meaningful patterns in what might otherwise be random data (Thricedotted’s internet identity often mentions apophenia). Time and again, the algorithm repeats a “work, scan, imagine” loop, scraping WikiHow, searching plain text memories for a concept encountered during “work,” and building a dream sequence—or “univision”—from concepts it doesn’t recognize. These univisions contain the most surprising poetry in the work, where beauty arises from the reader’s ineluctable tendency to feel meaning in fragments.

What if machines generated text with different stylistic goals? Or rather, what if we evaluated machine intelligence not by its humanness but by its alienness, by its ability to generate something beyond what we could have created—or would have thought to create—without the assistance of an algorithm? What if automated prose could rupture our automatized perceptions, as Shklovsky described poetry in Art as Device, and offer a new vehicle for our own creativity?

The latest developments in machine learning are enabling machines to develop models of us in turn, ever updating what information they present and how they present it to match the input we provide. Kazemi addresses this new give-and-take between man and machine head-on in his 2015 NaNoGenMo submission, “co-authoring” a novel with an algorithm: for every ten sentences the algorithm drafts, he commits only the one he, as the human, likes best. “Who wrote the book?” he asks. “[The algorithm] wrote literally every word, but [I] dictated nearly the entire form of the novel.” This is the same kind of dynamic that new research tools built on IBM Watson present to lawyers and doctors: ROSS, a legal tool built on the Watson API, presents answers to research questions, and all the lawyer has to do is commit the answer she likes best. If NaNoGenMo helps us think more deeply about that dynamic, it can offer important insights into the overall future of AI.

Moniker, a design studio based in Amsterdam, wrote a simple query that scans Twitter for sentences in the form “it’s + hour + : + minute + am/pm + and +” to compose a real-time global diary of daily activities. The “it’s [hour] and I…” template tends to elicit predictable confessions or complaints, showing how stock expressions automate our thoughts: “It’s 12:20 and I need a drink;” “It’s 1:00 pm and I have not moved from my bed;” “It’s 11:00 pm and I’ve finally got a decent cup of coffee.” Twide and Twejudice replaces most of the dialogue in Austen’s Pride and Prejudice with words used in similar contexts on Twitter, resulting in frivolous dialogue: (Mr Bennet asking Mrs Bennet about Mr Bingley:) “Is he/she overrun 0r single?” (Mrs Bennet exclaiming about Mr Bingley’s arrival:) “What _a fineee thingi 4my rageaholics girls!” While these lack the sophistication of The Seeker, by polluting Austen with Twitter diction, they illustrate how contemporary media have modified communication norms.

Earlier this year, the Wall Street Journal (WSJ) published an article with an interactive sidebar featuring excerpts from financial investment research reports. Readers were prompted to identify whether the excerpts were written by robots or humans. Admittedly, Wall Street’s preference for terse prose over poetic flourish makes a challenge like this plausible. “Q2 cash balance expectation of $830m implies ~$80m of cash burn in Q2 after a $140m reduction in cash balance in Q1,” a sampled sentence, is effectively just three data points fused together with syntax. And that’s no coincidence. White-collar workers like journalists need not fear for their job security (at least not yet…) because new natural language generation (NLG) algorithms are very good at representing structured data sets in prose, but not yet very good at much else. That capability in itself is very powerful, as our ability to draw insights from data often depends on how they are presented (e.g., a chart reveals insights one would have missed in rows and columns). But it is a far cry from the creative courage required to build a world on a blank page.

Technical constraints explain why NaNoGenMo has come to align itself with a poetics of recontextualization and reassembly. Indeed, genuine NLG algorithms, that is, those that can build words and syntax from the building blocks of letters and get smarter over time, are still nascent. Most of the 2014 submissions instead use rules to transform existing texts in creative ways, which also leads to topical similarities.

While open to anyone and, as in NaNoWriMo, governed by the single constraint that submissions contain at least 50,000 words, NaNoGenMo is gradually defining itself as a cohesive artistic movement that uses algorithms to experiment with literary form. The group’s identity is partly generated by ressentiment towards negative criticism that their “disjointed, robotic scripts” are “unlikely to trouble Booker judges.” Last year, one participant mocked how “futile it is to try to explain what we’re actually doing here, to the normals.” More positively, they are shaping identity through shared formal and critical resources. John Ohno (alias enkiv2) posted code to generate sestinas, haikus, and synonyms. Allison Parrish (alias aparrish) shared an interface to the Carnegie Mellon Pronouncing Dictionary that enables users to do things like scrape the dictionary for rhymes for a given word. Finally, Isaac Karth (alias ikarth) explained to members how the group’s tendency to assemble new poetry from prior texts has intellectual roots in Dadaism, Burroughs’s cut-up techniques, and the constraint-oriented works of Oulipo. When I spoke with Kazemi about the project, he said that Ken Goldsmith’s Uncreative Writing had inspired his thinking on how NaNoGenMo can challenge customary notions of authorship and creativity.

Evaluating these works by their capacity to read like human prose is a stale exercise because what qualifies as “natural” language is relative, not absolute. Our own linguistic habits are developed through interaction with others, be they members of a given social class, colleagues at work or school, or spambots littering our Twitter feeds. In a recent Medium post, Katie Rose Pipkin eloquently described how machines have already modified what we think of as natural language, whether we’re cognizant of it or not. We speak differently to search tools and virtual assistants because we have come to develop a tacit understanding of how they work and can modify our requests to communicate effectively.

It is this search to use automation as a vehicle for defamiliarization that makes NaNoGenMo so exciting. Darius Kazemi, an internet artist who runs an annual Bot Summit, created NaNoGenMo “on a whim” in November 2013. Thoughtful about literary form, Kazemi was amused by the fact that National Novel Writing Month (NaNoWriMo) set only two criteria for participants: submissions must be written in 30 days (the month of November) and must comprise at least 50,000 words. The absence of formal constraints invited experimentation: why write a novel when you can write an algorithm that writes a novel? He tweeted his idea, and a new community formed on GitHub (a web-based software development collaboration tool).

NLG algorithms are generally considered to be a form of “artificial” or “machine intelligence” because they do things—like write news articles about sports or the weather, or write real estate ads, as the prototype my Fast Forward Labs colleagues built—we believe humans alone can do. (I hope to explore the implications of the historical, relativist concept of artificial intelligence, espoused by people like Nancy Fulda, in a separate post.) As illustrated in the WSJ article, most people then evaluate NLG performance like André Bazin evaluates style in realism: as the art of realism lies in the seeming absence of artifice, so too does the art of algorithms lie in the seeming absence of automation. Commercialization only enhances this push towards verisimilitude, as investment banks and news agencies like Forbes won’t pay top dollar for software that generates strange prose. In turn, we come to judge machine intelligence by its humanness, orienting development efforts towards writing prose that we would have written ourselves.

Allison Parrish’s I Waded in Clear Water uses sentiment analysis algorithms, which rank sentences based on features that indicate emotional texture, to transform Gustavus Hindman Miller’s Ten Thousand Dreams Interpreted. Parrish mobilizes the formulaic “action” and “denotation” structure of Miller’s text (action = “To see an oak full of acorns”; denotation = “denotes increase and promotion”). She first transforms the actions into first-person, simple-past sentences (“I saw an oak full of acorns”) and then reorders the sentences from the worst to the best thing that can happen in dreams, according to a score given by a sentiment analysis algorithm run on the denotation. The sentiment scores create short chapters (“I drove into muddy water. I saw others weeding.”) and longer chapters with paratactic strings of disjointed actions: “…I descended a ladder. I saw any one lame. I saw my lover taking laudanum through disappointment. I heard mocking laughter. I kept a ledge. I had lice on my body. I saw. I lost it. I felt melancholy over any event. I saw others melancholy. I sent a message….” According to the sentiment algorithm, wading in clear water is our best dream.

Story 20

NLG algorithms are generally considered to be a form of “artificial” or “machine intelligence” because they do things—like write news articles about sports or the weather, or write real estate ads, as the prototype my Fast Forward Labs colleagues built—we believe humans alone can do. (I hope to explore the implications of the historical, relativist concept of artificial intelligence, espoused by people like Nancy Fulda, in a separate post.) As illustrated in the WSJ article, most people then evaluate NLG performance like André Bazin evaluates style in realism: as the art of realism lies in the seeming absence of artifice, so too does the art of algorithms lie in the seeming absence of automation. Commercialization only enhances this push towards verisimilitude, as investment banks and news agencies like Forbes won’t pay top dollar for software that generates strange prose. In turn, we come to judge machine intelligence by its humanness, orienting development efforts towards writing prose that we would have written ourselves.

While open to anyone and, as in NaNoWriMo, governed by the single constraint that submissions contain at least 50,000 words, NaNoGenMo is gradually defining itself as a cohesive artistic movement that uses algorithms to experiment with literary form. The group’s identity is partly generated by ressentiment towards negative criticism that their “disjointed, robotic scripts” are “unlikely to trouble Booker judges.” Last year, one participant mocked how “futile it is to try to explain what we’re actually doing here, to the normals.” More positively, they are shaping identity through shared formal and critical resources. John Ohno (alias enkiv2) posted code to generate sestinas, haikus, and synonyms. Allison Parrish (alias aparrish) shared an interface to the Carnegie Mellon Pronouncing Dictionary that enables users to do things like scrape the dictionary for rhymes for a given word. Finally, Isaac Karth (alias ikarth) explained to members how the group’s tendency to assemble new poetry from prior texts has intellectual roots in Dadaism, Burroughs’s cut-up techniques, and the constraint-oriented works of Oulipo. When I spoke with Kazemi about the project, he said that Ken Goldsmith’s Uncreative Writing had inspired his thinking on how NaNoGenMo can challenge customary notions of authorship and creativity.

At least two 2014 submissions use dreams as a locus to explore the odd beauty of machine intelligence. Thricedotted’s The Seeker relates the autobiography of a machine trying to “learn about human behavior by reading WikiHow.” The work is visually beautiful, with each iteration of the algorithm’s operations punctuated by pages that raindrop abstractions and house aphorisms like “imagine not one thing could be undirected.” Like the hopscotch overtones in Cortázar’s Rayuela, the aphorisms encourage the reader to perceive meaningful patterns in what might otherwise be random data (Thricedotted’s internet identity often mentions apophenia). Time and again, the algorithm repeats a “work, scan, imagine” loop, scraping WikiHow, searching plain text memories for a concept encountered during “work,” and building a dream sequence—or “univision”—from concepts it doesn’t recognize. These univisions contain the most surprising poetry in the work, where beauty arises from the reader’s ineluctable tendency to feel meaning in fragments.

Evaluating these works by their capacity to read like human prose is a stale exercise because what qualifies as “natural” language is relative, not absolute. Our own linguistic habits are developed through interaction with others, be they members of a given social class, colleagues at work or school, or spambots littering our Twitter feeds. In a recent Medium post, Katie Rose Pipkin eloquently described how machines have already modified what we think of as natural language, whether we’re cognizant of it or not. We speak differently to search tools and virtual assistants because we have come to develop a tacit understanding of how they work and can modify our requests to communicate effectively.

Moniker, a design studio based in Amsterdam, wrote a simple query that scans Twitter for sentences in the form “it’s + hour + : + minute + am/pm + and +” to compose a real-time global diary of daily activities. The “it’s [hour] and I…” template tends to elicit predictable confessions or complaints, showing how stock expressions automate our thoughts: “It’s 12:20 and I need a drink;” “It’s 1:00 pm and I have not moved from my bed;” “It’s 11:00 pm and I’ve finally got a decent cup of coffee.” Twide and Twejudice replaces most of the dialogue in Austen’s Pride and Prejudice with words used in similar contexts on Twitter, resulting in frivolous dialogue: (Mr Bennet asking Mrs Bennet about Mr Bingley:) “Is he/she overrun 0r single?” (Mrs Bennet exclaiming about Mr Bingley’s arrival:) “What _a fineee thingi 4my rageaholics girls!” While these lack the sophistication of The Seeker, by polluting Austen with Twitter diction, they illustrate how contemporary media have modified communication norms.

Technical constraints explain why NaNoGenMo has come to align itself with a poetics of recontextualization and reassembly. Indeed, genuine NLG algorithms, that is, those that can build words and syntax from the building blocks of letters and get smarter over time, are still nascent. Most of the 2014 submissions instead use rules to transform existing texts in creative ways, which also leads to topical similarities.

What if machines generated text with different stylistic goals? Or rather, what if we evaluated machine intelligence not by its humanness but by its alienness, by its ability to generate something beyond what we could have created—or would have thought to create—without the assistance of an algorithm? What if automated prose could rupture our automatized perceptions, as Shklovsky described poetry in Art as Device, and offer a new vehicle for our own creativity?

The latest developments in machine learning are enabling machines to develop models of us in turn, ever updating what information they present and how they present it to match the input we provide. Kazemi addresses this new give-and-take between man and machine head-on in his 2015 NaNoGenMo submission, “co-authoring” a novel with an algorithm: for every ten sentences the algorithm drafts, he commits only the one he, as the human, likes best. “Who wrote the book?” he asks. “[The algorithm] wrote literally every word, but [I] dictated nearly the entire form of the novel.” This is the same kind of dynamic that new research tools built on IBM Watson present to lawyers and doctors: ROSS, a legal tool built on the Watson API, presents answers to research questions, and all the lawyer has to do is commit the answer she likes best. If NaNoGenMo helps us think more deeply about that dynamic, it can offer important insights into the overall future of AI.

Allison Parrish’s I Waded in Clear Water uses sentiment analysis algorithms, which rank sentences based on features that indicate emotional texture, to transform Gustavus Hindman Miller’s Ten Thousand Dreams Interpreted. Parrish mobilizes the formulaic “action” and “denotation” structure of Miller’s text (action = “To see an oak full of acorns”; denotation = “denotes increase and promotion”). She first transforms the actions into first-person, simple-past sentences (“I saw an oak full of acorns”) and then reorders the sentences from the worst to the best thing that can happen in dreams, according to a score given by a sentiment analysis algorithm run on the denotation. The sentiment scores create short chapters (“I drove into muddy water. I saw others weeding.”) and longer chapters with paratactic strings of disjointed actions: “…I descended a ladder. I saw any one lame. I saw my lover taking laudanum through disappointment. I heard mocking laughter. I kept a ledge. I had lice on my body. I saw. I lost it. I felt melancholy over any event. I saw others melancholy. I sent a message….” According to the sentiment algorithm, wading in clear water is our best dream.

It is this search to use automation as a vehicle for defamiliarization that makes NaNoGenMo so exciting. Darius Kazemi, an internet artist who runs an annual Bot Summit, created NaNoGenMo “on a whim” in November 2013. Thoughtful about literary form, Kazemi was amused by the fact that National Novel Writing Month (NaNoWriMo) set only two criteria for participants: submissions must be written in 30 days (the month of November) and must comprise at least 50,000 words. The absence of formal constraints invited experimentation: why write a novel when you can write an algorithm that writes a novel? He tweeted his idea, and a new community formed on GitHub (a web-based software development collaboration tool).

Earlier this year, the Wall Street Journal (WSJ) published an article with an interactive sidebar featuring excerpts from financial investment research reports. Readers were prompted to identify whether the excerpts were written by robots or humans. Admittedly, Wall Street’s preference for terse prose over poetic flourish makes a challenge like this plausible. “Q2 cash balance expectation of $830m implies ~$80m of cash burn in Q2 after a $140m reduction in cash balance in Q1,” a sampled sentence, is effectively just three data points fused together with syntax. And that’s no coincidence. White-collar workers like journalists need not fear for their job security (at least not yet…) because new natural language generation (NLG) algorithms are very good at representing structured data sets in prose, but not yet very good at much else. That capability in itself is very powerful, as our ability to draw insights from data often depends on how they are presented (e.g., a chart reveals insights one would have missed in rows and columns). But it is a far cry from the creative courage required to build a world on a blank page.

Story 21

Moniker, a design studio based in Amsterdam, wrote a simple query that scans Twitter for sentences in the form “it’s + hour + : + minute + am/pm + and +” to compose a real-time global diary of daily activities. The “it’s [hour] and I…” template tends to elicit predictable confessions or complaints, showing how stock expressions automate our thoughts: “It’s 12:20 and I need a drink;” “It’s 1:00 pm and I have not moved from my bed;” “It’s 11:00 pm and I’ve finally got a decent cup of coffee.” Twide and Twejudice replaces most of the dialogue in Austen’s Pride and Prejudice with words used in similar contexts on Twitter, resulting in frivolous dialogue: (Mr Bennet asking Mrs Bennet about Mr Bingley:) “Is he/she overrun 0r single?” (Mrs Bennet exclaiming about Mr Bingley’s arrival:) “What _a fineee thingi 4my rageaholics girls!” While these lack the sophistication of The Seeker, by polluting Austen with Twitter diction, they illustrate how contemporary media have modified communication norms.

Evaluating these works by their capacity to read like human prose is a stale exercise because what qualifies as “natural” language is relative, not absolute. Our own linguistic habits are developed through interaction with others, be they members of a given social class, colleagues at work or school, or spambots littering our Twitter feeds. In a recent Medium post, Katie Rose Pipkin eloquently described how machines have already modified what we think of as natural language, whether we’re cognizant of it or not. We speak differently to search tools and virtual assistants because we have come to develop a tacit understanding of how they work and can modify our requests to communicate effectively.

NLG algorithms are generally considered to be a form of “artificial” or “machine intelligence” because they do things—like write news articles about sports or the weather, or write real estate ads, as the prototype my Fast Forward Labs colleagues built—we believe humans alone can do. (I hope to explore the implications of the historical, relativist concept of artificial intelligence, espoused by people like Nancy Fulda, in a separate post.) As illustrated in the WSJ article, most people then evaluate NLG performance like André Bazin evaluates style in realism: as the art of realism lies in the seeming absence of artifice, so too does the art of algorithms lie in the seeming absence of automation. Commercialization only enhances this push towards verisimilitude, as investment banks and news agencies like Forbes won’t pay top dollar for software that generates strange prose. In turn, we come to judge machine intelligence by its humanness, orienting development efforts towards writing prose that we would have written ourselves.

Allison Parrish’s I Waded in Clear Water uses sentiment analysis algorithms, which rank sentences based on features that indicate emotional texture, to transform Gustavus Hindman Miller’s Ten Thousand Dreams Interpreted. Parrish mobilizes the formulaic “action” and “denotation” structure of Miller’s text (action = “To see an oak full of acorns”; denotation = “denotes increase and promotion”). She first transforms the actions into first-person, simple-past sentences (“I saw an oak full of acorns”) and then reorders the sentences from the worst to the best thing that can happen in dreams, according to a score given by a sentiment analysis algorithm run on the denotation. The sentiment scores create short chapters (“I drove into muddy water. I saw others weeding.”) and longer chapters with paratactic strings of disjointed actions: “…I descended a ladder. I saw any one lame. I saw my lover taking laudanum through disappointment. I heard mocking laughter. I kept a ledge. I had lice on my body. I saw. I lost it. I felt melancholy over any event. I saw others melancholy. I sent a message….” According to the sentiment algorithm, wading in clear water is our best dream.

Earlier this year, the Wall Street Journal (WSJ) published an article with an interactive sidebar featuring excerpts from financial investment research reports. Readers were prompted to identify whether the excerpts were written by robots or humans. Admittedly, Wall Street’s preference for terse prose over poetic flourish makes a challenge like this plausible. “Q2 cash balance expectation of $830m implies ~$80m of cash burn in Q2 after a $140m reduction in cash balance in Q1,” a sampled sentence, is effectively just three data points fused together with syntax. And that’s no coincidence. White-collar workers like journalists need not fear for their job security (at least not yet…) because new natural language generation (NLG) algorithms are very good at representing structured data sets in prose, but not yet very good at much else. That capability in itself is very powerful, as our ability to draw insights from data often depends on how they are presented (e.g., a chart reveals insights one would have missed in rows and columns). But it is a far cry from the creative courage required to build a world on a blank page.

While open to anyone and, as in NaNoWriMo, governed by the single constraint that submissions contain at least 50,000 words, NaNoGenMo is gradually defining itself as a cohesive artistic movement that uses algorithms to experiment with literary form. The group’s identity is partly generated by ressentiment towards negative criticism that their “disjointed, robotic scripts” are “unlikely to trouble Booker judges.” Last year, one participant mocked how “futile it is to try to explain what we’re actually doing here, to the normals.” More positively, they are shaping identity through shared formal and critical resources. John Ohno (alias enkiv2) posted code to generate sestinas, haikus, and synonyms. Allison Parrish (alias aparrish) shared an interface to the Carnegie Mellon Pronouncing Dictionary that enables users to do things like scrape the dictionary for rhymes for a given word. Finally, Isaac Karth (alias ikarth) explained to members how the group’s tendency to assemble new poetry from prior texts has intellectual roots in Dadaism, Burroughs’s cut-up techniques, and the constraint-oriented works of Oulipo. When I spoke with Kazemi about the project, he said that Ken Goldsmith’s Uncreative Writing had inspired his thinking on how NaNoGenMo can challenge customary notions of authorship and creativity.

Technical constraints explain why NaNoGenMo has come to align itself with a poetics of recontextualization and reassembly. Indeed, genuine NLG algorithms, that is, those that can build words and syntax from the building blocks of letters and get smarter over time, are still nascent. Most of the 2014 submissions instead use rules to transform existing texts in creative ways, which also leads to topical similarities.

The latest developments in machine learning are enabling machines to develop models of us in turn, ever updating what information they present and how they present it to match the input we provide. Kazemi addresses this new give-and-take between man and machine head-on in his 2015 NaNoGenMo submission, “co-authoring” a novel with an algorithm: for every ten sentences the algorithm drafts, he commits only the one he, as the human, likes best. “Who wrote the book?” he asks. “[The algorithm] wrote literally every word, but [I] dictated nearly the entire form of the novel.” This is the same kind of dynamic that new research tools built on IBM Watson present to lawyers and doctors: ROSS, a legal tool built on the Watson API, presents answers to research questions, and all the lawyer has to do is commit the answer she likes best. If NaNoGenMo helps us think more deeply about that dynamic, it can offer important insights into the overall future of AI.

What if machines generated text with different stylistic goals? Or rather, what if we evaluated machine intelligence not by its humanness but by its alienness, by its ability to generate something beyond what we could have created—or would have thought to create—without the assistance of an algorithm? What if automated prose could rupture our automatized perceptions, as Shklovsky described poetry in Art as Device, and offer a new vehicle for our own creativity?

At least two 2014 submissions use dreams as a locus to explore the odd beauty of machine intelligence. Thricedotted’s The Seeker relates the autobiography of a machine trying to “learn about human behavior by reading WikiHow.” The work is visually beautiful, with each iteration of the algorithm’s operations punctuated by pages that raindrop abstractions and house aphorisms like “imagine not one thing could be undirected.” Like the hopscotch overtones in Cortázar’s Rayuela, the aphorisms encourage the reader to perceive meaningful patterns in what might otherwise be random data (Thricedotted’s internet identity often mentions apophenia). Time and again, the algorithm repeats a “work, scan, imagine” loop, scraping WikiHow, searching plain text memories for a concept encountered during “work,” and building a dream sequence—or “univision”—from concepts it doesn’t recognize. These univisions contain the most surprising poetry in the work, where beauty arises from the reader’s ineluctable tendency to feel meaning in fragments.

It is this search to use automation as a vehicle for defamiliarization that makes NaNoGenMo so exciting. Darius Kazemi, an internet artist who runs an annual Bot Summit, created NaNoGenMo “on a whim” in November 2013. Thoughtful about literary form, Kazemi was amused by the fact that National Novel Writing Month (NaNoWriMo) set only two criteria for participants: submissions must be written in 30 days (the month of November) and must comprise at least 50,000 words. The absence of formal constraints invited experimentation: why write a novel when you can write an algorithm that writes a novel? He tweeted his idea, and a new community formed on GitHub (a web-based software development collaboration tool).

Story 22

While open to anyone and, as in NaNoWriMo, governed by the single constraint that submissions contain at least 50,000 words, NaNoGenMo is gradually defining itself as a cohesive artistic movement that uses algorithms to experiment with literary form. The group’s identity is partly generated by ressentiment towards negative criticism that their “disjointed, robotic scripts” are “unlikely to trouble Booker judges.” Last year, one participant mocked how “futile it is to try to explain what we’re actually doing here, to the normals.” More positively, they are shaping identity through shared formal and critical resources. John Ohno (alias enkiv2) posted code to generate sestinas, haikus, and synonyms. Allison Parrish (alias aparrish) shared an interface to the Carnegie Mellon Pronouncing Dictionary that enables users to do things like scrape the dictionary for rhymes for a given word. Finally, Isaac Karth (alias ikarth) explained to members how the group’s tendency to assemble new poetry from prior texts has intellectual roots in Dadaism, Burroughs’s cut-up techniques, and the constraint-oriented works of Oulipo. When I spoke with Kazemi about the project, he said that Ken Goldsmith’s Uncreative Writing had inspired his thinking on how NaNoGenMo can challenge customary notions of authorship and creativity.

The latest developments in machine learning are enabling machines to develop models of us in turn, ever updating what information they present and how they present it to match the input we provide. Kazemi addresses this new give-and-take between man and machine head-on in his 2015 NaNoGenMo submission, “co-authoring” a novel with an algorithm: for every ten sentences the algorithm drafts, he commits only the one he, as the human, likes best. “Who wrote the book?” he asks. “[The algorithm] wrote literally every word, but [I] dictated nearly the entire form of the novel.” This is the same kind of dynamic that new research tools built on IBM Watson present to lawyers and doctors: ROSS, a legal tool built on the Watson API, presents answers to research questions, and all the lawyer has to do is commit the answer she likes best. If NaNoGenMo helps us think more deeply about that dynamic, it can offer important insights into the overall future of AI.

At least two 2014 submissions use dreams as a locus to explore the odd beauty of machine intelligence. Thricedotted’s The Seeker relates the autobiography of a machine trying to “learn about human behavior by reading WikiHow.” The work is visually beautiful, with each iteration of the algorithm’s operations punctuated by pages that raindrop abstractions and house aphorisms like “imagine not one thing could be undirected.” Like the hopscotch overtones in Cortázar’s Rayuela, the aphorisms encourage the reader to perceive meaningful patterns in what might otherwise be random data (Thricedotted’s internet identity often mentions apophenia). Time and again, the algorithm repeats a “work, scan, imagine” loop, scraping WikiHow, searching plain text memories for a concept encountered during “work,” and building a dream sequence—or “univision”—from concepts it doesn’t recognize. These univisions contain the most surprising poetry in the work, where beauty arises from the reader’s ineluctable tendency to feel meaning in fragments.

What if machines generated text with different stylistic goals? Or rather, what if we evaluated machine intelligence not by its humanness but by its alienness, by its ability to generate something beyond what we could have created—or would have thought to create—without the assistance of an algorithm? What if automated prose could rupture our automatized perceptions, as Shklovsky described poetry in Art as Device, and offer a new vehicle for our own creativity?

Earlier this year, the Wall Street Journal (WSJ) published an article with an interactive sidebar featuring excerpts from financial investment research reports. Readers were prompted to identify whether the excerpts were written by robots or humans. Admittedly, Wall Street’s preference for terse prose over poetic flourish makes a challenge like this plausible. “Q2 cash balance expectation of $830m implies ~$80m of cash burn in Q2 after a $140m reduction in cash balance in Q1,” a sampled sentence, is effectively just three data points fused together with syntax. And that’s no coincidence. White-collar workers like journalists need not fear for their job security (at least not yet…) because new natural language generation (NLG) algorithms are very good at representing structured data sets in prose, but not yet very good at much else. That capability in itself is very powerful, as our ability to draw insights from data often depends on how they are presented (e.g., a chart reveals insights one would have missed in rows and columns). But it is a far cry from the creative courage required to build a world on a blank page.

NLG algorithms are generally considered to be a form of “artificial” or “machine intelligence” because they do things—like write news articles about sports or the weather, or write real estate ads, as the prototype my Fast Forward Labs colleagues built—we believe humans alone can do. (I hope to explore the implications of the historical, relativist concept of artificial intelligence, espoused by people like Nancy Fulda, in a separate post.) As illustrated in the WSJ article, most people then evaluate NLG performance like André Bazin evaluates style in realism: as the art of realism lies in the seeming absence of artifice, so too does the art of algorithms lie in the seeming absence of automation. Commercialization only enhances this push towards verisimilitude, as investment banks and news agencies like Forbes won’t pay top dollar for software that generates strange prose. In turn, we come to judge machine intelligence by its humanness, orienting development efforts towards writing prose that we would have written ourselves.

Evaluating these works by their capacity to read like human prose is a stale exercise because what qualifies as “natural” language is relative, not absolute. Our own linguistic habits are developed through interaction with others, be they members of a given social class, colleagues at work or school, or spambots littering our Twitter feeds. In a recent Medium post, Katie Rose Pipkin eloquently described how machines have already modified what we think of as natural language, whether we’re cognizant of it or not. We speak differently to search tools and virtual assistants because we have come to develop a tacit understanding of how they work and can modify our requests to communicate effectively.

It is this search to use automation as a vehicle for defamiliarization that makes NaNoGenMo so exciting. Darius Kazemi, an internet artist who runs an annual Bot Summit, created NaNoGenMo “on a whim” in November 2013. Thoughtful about literary form, Kazemi was amused by the fact that National Novel Writing Month (NaNoWriMo) set only two criteria for participants: submissions must be written in 30 days (the month of November) and must comprise at least 50,000 words. The absence of formal constraints invited experimentation: why write a novel when you can write an algorithm that writes a novel? He tweeted his idea, and a new community formed on GitHub (a web-based software development collaboration tool).

Technical constraints explain why NaNoGenMo has come to align itself with a poetics of recontextualization and reassembly. Indeed, genuine NLG algorithms, that is, those that can build words and syntax from the building blocks of letters and get smarter over time, are still nascent. Most of the 2014 submissions instead use rules to transform existing texts in creative ways, which also leads to topical similarities.

Moniker, a design studio based in Amsterdam, wrote a simple query that scans Twitter for sentences in the form “it’s + hour + : + minute + am/pm + and +” to compose a real-time global diary of daily activities. The “it’s [hour] and I…” template tends to elicit predictable confessions or complaints, showing how stock expressions automate our thoughts: “It’s 12:20 and I need a drink;” “It’s 1:00 pm and I have not moved from my bed;” “It’s 11:00 pm and I’ve finally got a decent cup of coffee.” Twide and Twejudice replaces most of the dialogue in Austen’s Pride and Prejudice with words used in similar contexts on Twitter, resulting in frivolous dialogue: (Mr Bennet asking Mrs Bennet about Mr Bingley:) “Is he/she overrun 0r single?” (Mrs Bennet exclaiming about Mr Bingley’s arrival:) “What _a fineee thingi 4my rageaholics girls!” While these lack the sophistication of The Seeker, by polluting Austen with Twitter diction, they illustrate how contemporary media have modified communication norms.

Allison Parrish’s I Waded in Clear Water uses sentiment analysis algorithms, which rank sentences based upon features that indicate emotional texture, to transform Gustavus Hindman Miller’s Ten Thousand Dreams Interpreted. Parrish mobilizes the formulaic “action” and “denotation” structure of Miller’s text (action = “To see an oak full of acorns”; denotation = “denotes increase and promotion”). She first transforms the actions into first-person, simple-past sentences (“I saw an oak full of acorns”) and then reorders the sentences from the worst to the best thing that can happen in dreams, according to a score given by a sentiment analysis algorithm run on the denotation. The sentiment scores create short chapters (“I drove into muddy water. I saw others weeding.”) and longer chapters with paratactic strings of disjointed actions: “…I descended a ladder. I saw any one lame. I saw my lover taking laudanum through disappointment. I heard mocking laughter. I kept a ledge. I had lice on my body. I saw. I lost it. I felt melancholy over any event. I saw others melancholy. I sent a message….” According to the sentiment algorithm, wading in clear water is our best dream.
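
A minimal sketch of that reordering step, assuming a generic sentiment scorer (NLTK’s VADER here, not necessarily what Parrish used) and a few invented action/denotation pairs in the style of Miller’s text:

```python
# Sketch of the reorder-by-sentiment step; not Parrish's actual pipeline.
# Requires: pip install nltk, then nltk.download('vader_lexicon')
from nltk.sentiment.vader import SentimentIntensityAnalyzer

entries = [  # invented (action, denotation) pairs in Miller's format
    ("I saw an oak full of acorns", "denotes increase and promotion"),
    ("I drove into muddy water", "denotes distress and loss"),
    ("I waded in clear water", "denotes success and great happiness"),
]

sia = SentimentIntensityAnalyzer()

# Score each denotation, then order the first-person actions from the
# worst dream to the best, as in the chapter ordering described above.
ranked = sorted(entries, key=lambda e: sia.polarity_scores(e[1])["compound"])
for action, _ in ranked:
    print(action + ".")
```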

Story 23

The latest developments in machine learning are enabling machines to develop models of us in turn, ever updating what information they present and how they present it to match the input we provide. Kazemi is addressing this new give-and-take between man and machine head-on in his 2015 NaNoGenMo submission, “co-authoring” a novel with an algorithm: for every ten sentences the algorithm drafts, he commits only the one he, as the human, likes best. “Who wrote the book?” he asks. “[The algorithm] wrote literally every word, but [I] dictated nearly the entire form of the novel.” This is the same kind of dynamic that new research tools built on IBM Watson are presenting to lawyers and doctors: ROSS, a legal tool built on the Watson API, presents answers to research questions, and all the lawyer has to do is commit the answer she likes best. If NaNoGenMo helps us think more deeply about that dynamic, it can offer important insights into the future of AI.
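
The selection loop is easy to picture in code. In the schematic sketch below, the drafting function and the stdin prompt are stand-ins of my own, not Kazemi’s implementation:

```python
def draft_sentences(n=10):
    """Stand-in for whatever process generates the candidate sentences."""
    return [f"Candidate sentence {i}." for i in range(1, n + 1)]

def human_picks(candidates):
    """Stand-in for the human judgment: choose one of the ten on stdin."""
    for i, sentence in enumerate(candidates, 1):
        print(f"{i}: {sentence}")
    return candidates[int(input("Keep which sentence? ")) - 1]

novel = []
while len(novel) < 3:  # toy length; the real constraint is 50,000 words
    novel.append(human_picks(draft_sentences()))
print(" ".join(novel))
```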

While open to anyone and, as in NaNoWriMo, governed by the single constraint that submissions contain at least 50,000 words, NaNoGenMo is gradually defining itself as a cohesive artistic movement that uses algorithms to experiment with literary form. The group’s identity is partly generated by ressentiment towards negative criticism that their “disjointed, robotic scripts” are “unlikely to trouble Booker judges.” Last year, one participant mocked how “futile it is to try to explain what we’re actually doing here, to the normals.” More positively, they are shaping identity through shared formal and critical resources. John Ohno (alias enkiv2) posted code to generate sestinas, haikus, and synonyms. Allison Parrish (alias aparrish) shared an interface to the Carnegie Mellon Pronouncing Dictionary that enables users to do things like scrape the dictionary for rhymes for a given word. Finally, Isaac Karth (alias ikarth) explained to members how the group’s tendency to assemble new poetry from prior texts has intellectual roots in Dadaism, Burroughs’s cut-up techniques, and the constraint-oriented works of Oulipo. When I spoke with Kazemi about the project, he said that Ken Goldsmith’s Uncreative Writing had inspired his thinking on how NaNoGenMo can challenge customary notions of authorship and creativity.
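
Assuming the Pronouncing Dictionary interface mentioned above is Parrish’s pronouncing package for Python (an assumption on my part), scraping rhymes really is a one-liner:

```python
# pip install pronouncing -- a thin wrapper around the CMU Pronouncing Dictionary
import pronouncing

print(pronouncing.rhymes("water")[:10])  # dictionary words that rhyme with "water"
```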

Moniker, a design studio based in Amsterdam, wrote a simple query that scans Twitter for sentences in the form “it’s + hour + : + minute + am/pm + and +” to compose a real-time global diary of daily activities. The “it’s [time] and I am” formula tends to elicit predictable confessions or complaints, showing how expressions automate our thoughts: “It’s 12:20 and I need a drink;” “It’s 1:00 pm and I have not moved from my bed;” “It’s 11:00 pm and I’ve finally got a decent cup of coffee.” Twide and Twejudice replaces most of the words in the dialogue of Austen’s Pride and Prejudice, each with a word used in a similar context on Twitter, resulting in frivolous dialogue: (Mr Bennet asking Mrs Bennet about Mr Bingley:) “Is he/she overrun 0r single?” (Mrs Bennet exclaiming about Mr Bingley’s arrival:) “What _a fineee thingi 4my rageaholics girls!” While these lack the sophistication of The Seeker, by polluting Austen with Twitter diction, they illustrate how contemporary media have modified communication norms.

Allison Parrish’s I Waded in Clear Water uses sentiment analysis algorithms, which rank sentences based upon features that indicate emotional texture, to transform Gustavus Hindman Miller’s Ten Thousand Dreams Interpreted. Parrish mobilizes the formulaic “action” and “denotation” structure of Miller’s text (action = “To see an oak full of acorns”; denotation = “denotes increase and promotion”). She first transforms the actions into first-person, simple-past sentences (“I saw an oak full of acorns”) and then reorders the sentences from the worst to the best thing that can happen in dreams, according to a score given by a sentiment analysis algorithm run on the denotation. The sentiment scores create short chapters (“I drove into muddy water. I saw others weeding.”) and longer chapters with paratactic strings of disjointed actions: “…I descended a ladder. I saw any one lame. I saw my lover taking laudanum through disappointment. I heard mocking laughter. I kept a ledge. I had lice on my body. I saw. I lost it. I felt melancholy over any event. I saw others melancholy. I sent a message….” According to the sentiment algorithm, wading in clear water is our best dream.

It is this search to use automation as a vehicle for defamiliarization that makes NaNoGenMo so exciting. Darius Kazemi, an internet artist who runs an annual Bot Summit, created NaNoGenMo “on a whim” in November 2013. Thoughtful about literary form, Kazemi was amused by the fact that National Novel Writing Month (NaNoWriMo) set only two criteria for participants: submissions must be written in 30 days (the month of November) and must comprise at least 50,000 words. The absence of form invited experimentation: why write a novel when you can write an algorithm that writes a novel? He tweeted his idea, and a new community formed on GitHub, a web-based software development collaboration tool.

Evaluating these works by their capacity to read like human prose is a stale exercise because what qualifies as “natural” language is relative, not absolute. Our own linguistic habits are developed through interaction with others, be they members of a given social class, colleagues at work or school, or spambots littering our Twitter feeds. In a recent Medium post, Katie Rose Pipkin eloquently described how machines have already modified what we think of as natural language, whether we're cognizant of it or not. We speak differently to search tools and virtual assistants because we have come to develop a tacit understanding of how they work and can modify our requests to communicate effectively.

At least two 2014 submissions use dreams as a locus to explore the odd beauty of machine intelligence. Thricedotted’s The Seeker relates the autobiography of a machine trying to “learn about human behavior by reading WikiHow.” The work is visually beautiful, with each iteration of the algorithm’s operations punctuated by pages that raindrop abstractions and house aphorisms like “imagine not one thing could be undirected.” Like the hopscotch overtones in Cortázar’s Rayuela, the aphorisms encourage the reader to perceive meaningful patterns in what might otherwise be random data (Thricedotted’s internet identity often mentions apophenia). Time and again, the algorithm repeats a “work, scan, imagine” loop, scraping WikiHow, searching plain text memories for a concept encountered during “work,” and building a dream sequence—or “univision”—from concepts it doesn’t recognize. These univisions contain the most surprising poetry in the work, where beauty arises from the reader’s ineluctable tendency to feel meaning in fragments.
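
The loop can be reconstructed schematically from this description; every function body below is a placeholder of mine, not Thricedotted’s code:

```python
import random

# Placeholder concept pool; The Seeker scrapes its concepts from WikiHow.
CONCEPTS = ["fold a fitted sheet", "apologize", "whistle",
            "tie a bowline", "undirected", "daydream"]

def work():
    """Stand-in for scraping a WikiHow article: return a few concepts."""
    return random.sample(CONCEPTS, 3)

def scan(memory, concepts):
    """Split concepts into those found in memory and those that are new."""
    return ([c for c in concepts if c in memory],
            [c for c in concepts if c not in memory])

def imagine(unknown):
    """Assemble a dream sequence, a 'univision', from unrecognized concepts."""
    return "univision: " + " / ".join(unknown) if unknown else "(no univision)"

memory = set()
for _ in range(3):  # "time and again"
    concepts = work()
    _, unknown = scan(memory, concepts)
    print(imagine(unknown))
    memory.update(concepts)
```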

NLG algorithms are generally considered to be a form of “artificial” or “machine intelligence” because they do things—like write news articles about sports or the weather, or write real estate ads, as the prototype my Fast Forward Labs colleagues built—we believe humans alone can do. (I hope to explore the implications of the historical, relativist concept of artificial intelligence, espoused by people like Nancy Fulda, in a separate post.) As illustrated in the WSJ article, most people then evaluate NLG performance like André Bazin evaluates style in realism: as the art of realism lies in the seeming absence of artifice, so too does the art of algorithms lie in the seeming absence of automation. Commercialization only enhances this push towards verisimilitude, as investment banks and news agencies like Forbes won’t pay top dollar for software that generates strange prose. In turn, we come to judge machine intelligence by its humanness, orienting development efforts towards writing prose that we would have written ourselves.

Technical constraints explain why NaNoGenMo has come to align itself with poetics of recontextualization and reassembly. Indeed, genuine NLG algorithms, that is, those that can build words and syntax from the building blocks of letters and get smarter over time, are still nascent. Most of the 2014 submissions instead use rules to transform existing texts in creative ways, which also leads to topical similarities.

What if machines generated text with different stylistic goals? Or rather, what if we evaluated machine intelligence not by its humanness but by its alienness, by its ability to generate something beyond what we could have created—or would have thought to create—without the assistance of an algorithm? What if automated prose could rupture our automatized perceptions, as Shklovsky described poetry in Art as Device, and offer a new vehicle for our own creativity?

Earlier this year, the Wall Street Journal (WSJ) published an article with an interactive sidebar featuring excerpts from financial investment research reports. Readers were prompted to identify whether the excerpts were written by robots or humans. Admittedly, Wall Street’s preference for terse prose over poetic flourish makes a challenge like this plausible. “Q2 cash balance expectation of $830m implies ~$80m of cash burn in Q2 after a $140m reduction in cash balance in Q1,” a sampled sentence, is effectively just three data points fused together with syntax. And that’s no coincidence. White-collar workers like journalists need not fear for their job security (at least not yet…) because new natural language generation (NLG) algorithms are very good at representing structured data sets in prose, but not yet very good at much else. That capability in itself is very powerful, as our ability to draw insights from data often depends on how they are presented (e.g., a chart reveals insights one would have missed in rows and columns). But it is a far cry from the creative courage required to build a world on a blank page.
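
That “three data points fused together with syntax” is, in effect, template-based generation. A toy version (the template and figures simply mirror the sampled sentence; this is no vendor’s actual system):

```python
# Toy template-based NLG: three structured values fused together with syntax.
data = {"q2_cash": 830, "burn": 80, "q1_reduction": 140}

sentence = (
    f"Q2 cash balance expectation of ${data['q2_cash']}m implies "
    f"~${data['burn']}m of cash burn in Q2 after a "
    f"${data['q1_reduction']}m reduction in cash balance in Q1"
)
print(sentence)
```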

Story 24

Technical constraints explain why NaNoGenMo has come to align itself with poetics of recontextualization and reassembly. Indeed, genuine NLG algorithms, that is, those that can build words and syntax from the building blocks of letters and get smarter over time, are still nascent. Most of the 2014 submissions instead use rules to transform existing texts in creative ways, which also leads to topical similarities.

It is this search to use automation as a vehicle for defamiliarization that makes NaNoGenMo so exciting. Darius Kazemi, an internet artist who runs an annual Bot Summit, created NaNoGenMo “on a whim” in November 2013. Thoughtful about literary form, Kazemi was amused by the fact that National Novel Writing Month (NaNoWriMo) set only two criteria for participants: submissions must be written in 30 days (the month of November) and must comprise at least 50,000 words. The absence of form invited experimentation: why write a novel when you can write an algorithm that writes a novel? He tweeted his idea, and a new community formed on GitHub, a web-based software development collaboration tool.

Moniker, a design studio based in Amsterdam, wrote a simple query that scans Twitter for sentences in the form “it’s + hour + : + minute + am/pm + and +” to compose a real-time global diary of daily activities. The “it’s [time] and I am” formula tends to elicit predictable confessions or complaints, showing how expressions automate our thoughts: “It’s 12:20 and I need a drink;” “It’s 1:00 pm and I have not moved from my bed;” “It’s 11:00 pm and I’ve finally got a decent cup of coffee.” Twide and Twejudice replaces most of the words in the dialogue of Austen’s Pride and Prejudice, each with a word used in a similar context on Twitter, resulting in frivolous dialogue: (Mr Bennet asking Mrs Bennet about Mr Bingley:) “Is he/she overrun 0r single?” (Mrs Bennet exclaiming about Mr Bingley’s arrival:) “What _a fineee thingi 4my rageaholics girls!” While these lack the sophistication of The Seeker, by polluting Austen with Twitter diction, they illustrate how contemporary media have modified communication norms.

Allison Parrish’s I Waded in Clear Water uses sentiment analysis algorithms, which rank sentences based upon features that indicate emotional texture, to transform Gustavus Hindman Miller’s Ten Thousand Dreams Interpreted. Parrish mobilizes the formulaic “action” and “denotation” structure of Miller’s text (action = “To see an oak full of acorns”; denotation = “denotes increase and promotion”). She first transforms the actions into first-person, simple-past sentences (“I saw an oak full of acorns”) and then reorders the sentences from the worst to the best thing that can happen in dreams, according to a score given by a sentiment analysis algorithm run on the denotation. The sentiment scores create short chapters (“I drove into muddy water. I saw others weeding.”) and longer chapters with paratactic strings of disjointed actions: “…I descended a ladder. I saw any one lame. I saw my lover taking laudanum through disappointment. I heard mocking laughter. I kept a ledge. I had lice on my body. I saw. I lost it. I felt melancholy over any event. I saw others melancholy. I sent a message….” According to the sentiment algorithm, wading in clear water is our best dream.

Earlier this year, the Wall Street Journal (WSJ) published an article with an interactive sidebar featuring excerpts from financial investment research reports. Readers were prompted to identify whether the excerpts were written by robots or humans. Admittedly, Wall Street’s preference for terse prose over poetic flourish makes a challenge like this plausible. “Q2 cash balance expectation of $830m implies ~$80m of cash burn in Q2 after a $140m reduction in cash balance in Q1,” a sampled sentence, is effectively just three data points fused together with syntax. And that’s no coincidence. White-collar workers like journalists need not fear for their job security (at least not yet…) because new natural language generation (NLG) algorithms are very good at representing structured data sets in prose, but not yet very good at much else. That capability in itself is very powerful, as our ability to draw insights from data often depends on how they are presented (e.g., a chart reveals insights one would have missed in rows and columns). But it is a far cry from the creative courage required to build a world on a blank page.

The latest developments in machine learning are enabling machines to develop models of us in turn, ever updating what information they present and how they present it to match the input we provide. Kazemi is addressing this new give-and-take between man and machine head-on in his 2015 NaNoGenMo submission, “co-authoring” a novel with an algorithm: for every ten sentences the algorithm drafts, he commits only the one he, as the human, likes best. “Who wrote the book?” he asks. “[The algorithm] wrote literally every word, but [I] dictated nearly the entire form of the novel.” This is the same kind of dynamic that new research tools built on IBM Watson are presenting to lawyers and doctors: ROSS, a legal tool built on the Watson API, presents answers to research questions, and all the lawyer has to do is commit the answer she likes best. If NaNoGenMo helps us think more deeply about that dynamic, it can offer important insights into the future of AI.

Evaluating these works by their capacity to read like human prose is a stale exercise because what qualifies as “natural” language is relative, not absolute. Our own linguistic habits are developed through interaction with others, be they members of a given social class, colleagues at work or school, or spambots littering our Twitter feeds. In a recent Medium post, Katie Rose Pipkin eloquently described how machines have already modified what we think of as natural language, whether we're cognizant of it or not. We speak differently to search tools and virtual assistants because we have come to develop a tacit understanding of how they work and can modify our requests to communicate effectively.

While open to anyone and, as in NaNoWriMo, governed by the single constraint that submissions contain at least 50,000 words, NaNoGenMo is gradually defining itself as a cohesive artistic movement that uses algorithms to experiment with literary form. The group’s identity is partly generated by ressentiment towards negative criticism that their “disjointed, robotic scripts” are “unlikely to trouble Booker judges.” Last year, one participant mocked how “futile it is to try to explain what we’re actually doing here, to the normals.” More positively, they are shaping identity through shared formal and critical resources. John Ohno (alias enkiv2) posted code to generate sestinas, haikus, and synonyms. Allison Parrish (alias aparrish) shared an interface to the Carnegie Mellon Pronouncing Dictionary that enables users to do things like scrape the dictionary for rhymes for a given word. Finally, Isaac Karth (alias ikarth) explained to members how the group’s tendency to assemble new poetry from prior texts has intellectual roots in Dadaism, Burroughs’s cut-up techniques, and the constraint-oriented works of Oulipo. When I spoke with Kazemi about the project, he said that Ken Goldsmith’s Uncreative Writing had inspired his thinking on how NaNoGenMo can challenge customary notions of authorship and creativity.

NLG algorithms are generally considered to be a form of “artificial” or “machine intelligence” because they do things—like write news articles about sports or the weather, or write real estate ads, as the prototype my Fast Forward Labs colleagues built—we believe humans alone can do. (I hope to explore the implications of the historical, relativist concept of artificial intelligence, espoused by people like Nancy Fulda, in a separate post.) As illustrated in the WSJ article, most people then evaluate NLG performance like André Bazin evaluates style in realism: as the art of realism lies in the seeming absence of artifice, so too does the art of algorithms lie in the seeming absence of automation. Commercialization only enhances this push towards verisimilitude, as investment banks and news agencies like Forbes won’t pay top dollar for software that generates strange prose. In turn, we come to judge machine intelligence by its humanness, orienting development efforts towards writing prose that we would have written ourselves.

What if machines generated text with different stylistic goals? Or rather, what if we evaluated machine intelligence not by its humanness but by its alienness, by its ability to generate something beyond what we could have created—or would have thought to create—without the assistance of an algorithm? What if automated prose could rupture our automatized perceptions, as Shklovsky described poetry in Art as Device, and offer a new vehicle for our own creativity?

At least two 2014 submissions use dreams as a locus to explore the odd beauty of machine intelligence. Thricedotted’s The Seeker relates the autobiography of a machine trying to “learn about human behavior by reading WikiHow.” The work is visually beautiful, with each iteration of the algorithm’s operations punctuated by pages that raindrop abstractions and house aphorisms like “imagine not one thing could be undirected.” Like the hopscotch overtones in Cortázar’s Rayuela, the aphorisms encourage the reader to perceive meaningful patterns in what might otherwise be random data (Thricedotted’s internet identity often mentions apophenia). Time and again, the algorithm repeats a “work, scan, imagine” loop, scraping WikiHow, searching plain text memories for a concept encountered during “work,” and building a dream sequence—or “univision”—from concepts it doesn’t recognize. These univisions contain the most surprising poetry in the work, where beauty arises from the reader’s ineluctable tendency to feel meaning in fragments.

Story 25

The latest developments in machine learning are enabling machines to develop models of us in turn, ever updating what information they present and how they present it to match the input we provide. Kazemi is addressing this new give-and-take between man and machine head-on in his 2015 NaNoGenMo submission, “co-authoring” a novel with an algorithm: for every ten sentences the algorithm drafts, he commits only the one he, as the human, likes best. “Who wrote the book?” he asks. “[The algorithm] wrote literally every word, but [I] dictated nearly the entire form of the novel.” This is the same kind of dynamic that new research tools built on IBM Watson are presenting to lawyers and doctors: ROSS, a legal tool built on the Watson API, presents answers to research questions, and all the lawyer has to do is commit the answer she likes best. If NaNoGenMo helps us think more deeply about that dynamic, it can offer important insights into the future of AI.

Allison Parrish’s I Waded in Clear Water uses sentiment analysis algorithms, which rank sentences based upon features that indicate emotional texture, to transform Gustavus Hindman Miller’s Ten Thousand Dreams Interpreted. Parrish mobilizes the formulaic “action” and “denotation” structure of Miller’s text (action = “To see an oak full of acorns”; denotation = “denotes increase and promotion”). She first transforms the actions into first-person, simple-past sentences (“I saw an oak full of acorns”) and then reorders the sentences from the worst to the best thing that can happen in dreams, according to a score given by a sentiment analysis algorithm run on the denotation. The sentiment scores create short chapters (“I drove into muddy water. I saw others weeding.”) and longer chapters with paratactic strings of disjointed actions: “…I descended a ladder. I saw any one lame. I saw my lover taking laudanum through disappointment. I heard mocking laughter. I kept a ledge. I had lice on my body. I saw. I lost it. I felt melancholy over any event. I saw others melancholy. I sent a message….” According to the sentiment algorithm, wading in clear water is our best dream.

It is this search to use automation as a vehicle for defamiliarization that makes NaNoGenMo so exciting. Darius Kazemi, an internet artist who runs an annual Bot Summit, created NaNoGenMo “on a whim” in November 2013. Thoughtful about literary form, Kazemi was amused by the fact that National Novel Writing Month (NaNoWriMo) set only two criteria for participants: submissions must be written in 30 days (the month of November) and must comprise at least 50,000 words. The absence of form invited experimentation: why write a novel when you can write an algorithm that writes a novel? He tweeted his idea, and a new community formed on GitHub, a web-based software development collaboration tool.

What if machines generated text with different stylistic goals? Or rather, what if we evaluated machine intelligence not by its humanness but by its alienness, by its ability to generate something beyond what we could have created—or would have thought to create—without the assistance of an algorithm? What if automated prose could rupture our automatized perceptions, as Shklovsky described poetry in Art as Device, and offer a new vehicle for our own creativity?

Technical constraints explain why NaNoGenMo has come to align itself with poetics of recontextualization and reassembly. Indeed, genuine NLG algorithms, that is, those that can build words and syntax from the building blocks of letters and get smarter over time, are still nascent. Most of the 2014 submissions instead use rules to transform existing texts in creative ways, which also leads to topical similarities.

While open to anyone and, as in NaNoWriMo, governed by the single constraint that submissions contain at least 50,000 words, NaNoGenMo is gradually defining itself as a cohesive artistic movement that uses algorithms to experiment with literary form. The group’s identity is partly generated by ressentiment towards negative criticism that their “disjointed, robotic scripts” are “unlikely to trouble Booker judges.” Last year, one participant mocked how “futile it is to try to explain what we’re actually doing here, to the normals.” More positively, they are shaping identity through shared formal and critical resources. John Ohno (alias enkiv2) posted code to generate sestinas, haikus, and synonyms. Allison Parrish (alias aparrish) shared an interface to the Carnegie Mellon Pronouncing Dictionary that enables users to do things like scrape the dictionary for rhymes for a given word. Finally, Isaac Karth (alias ikarth) explained to members how the group’s tendency to assemble new poetry from prior texts has intellectual roots in Dadaism, Burroughs’s cut-up techniques, and the constraint-oriented works of Oulipo. When I spoke with Kazemi about the project, he said that Ken Goldsmith’s Uncreative Writing had inspired his thinking on how NaNoGenMo can challenge customary notions of authorship and creativity.

At least two 2014 submissions use dreams as a locus to explore the odd beauty of machine intelligence. Thricedotted’s The Seeker relates the autobiography of a machine trying to “learn about human behavior by reading WikiHow.” The work is visually beautiful, with each iteration of the algorithm’s operations punctuated by pages that raindrop abstractions and house aphorisms like “imagine not one thing could be undirected.” Like the hopscotch overtones in Cortázar’s Rayuela, the aphorisms encourage the reader to perceive meaningful patterns in what might otherwise be random data (Thricedotted’s internet identity often mentions apophenia). Time and again, the algorithm repeats a “work, scan, imagine” loop, scraping WikiHow, searching plain text memories for a concept encountered during “work,” and building a dream sequence—or “univision”—from concepts it doesn’t recognize. These univisions contain the most surprising poetry in the work, where beauty arises from the reader’s ineluctable tendency to feel meaning in fragments.

Evaluating these works by their capacity to read like human prose is a stale exercise because what qualifies as “natural” language is relative, not absolute. Our own linguistic habits are developed through interaction with others, be they members of a given social class, colleagues at work or school, or spambots littering our Twitter feeds. In a recent Medium post, Katie Rose Pipkin eloquently described how machines have already modified what we think of as natural language, whether we're cognizant of it or not. We speak differently to search tools and virtual assistants because we have come to develop a tacit understanding of how they work and can modify our requests to communicate effectively.

Moniker, a design studio based in Amsterdam, wrote a simple query that scans Twitter for sentences in the form “it’s + hour + : + minute + am/pm + and +” to compose a real-time global diary of daily activities. The “it’s [time] and I am” formula tends to elicit predictable confessions or complaints, showing how expressions automate our thoughts: “It’s 12:20 and I need a drink;” “It’s 1:00 pm and I have not moved from my bed;” “It’s 11:00 pm and I’ve finally got a decent cup of coffee.” Twide and Twejudice replaces most of the words in the dialogue of Austen’s Pride and Prejudice, each with a word used in a similar context on Twitter, resulting in frivolous dialogue: (Mr Bennet asking Mrs Bennet about Mr Bingley:) “Is he/she overrun 0r single?” (Mrs Bennet exclaiming about Mr Bingley’s arrival:) “What _a fineee thingi 4my rageaholics girls!” While these lack the sophistication of The Seeker, by polluting Austen with Twitter diction, they illustrate how contemporary media have modified communication norms.

NLG algorithms are generally considered to be a form of “artificial” or “machine intelligence” because they do things—like write news articles about sports or the weather, or write real estate ads, as the prototype my Fast Forward Labs colleagues built—we believe humans alone can do. (I hope to explore the implications of the historical, relativist concept of artificial intelligence, espoused by people like Nancy Fulda, in a separate post.) As illustrated in the WSJ article, most people then evaluate NLG performance like André Bazin evaluates style in realism: as the art of realism lies in the seeming absence of artifice, so too does the art of algorithms lie in the seeming absence of automation. Commercialization only enhances this push towards verisimilitude, as investment banks and news agencies like Forbes won’t pay top dollar for software that generates strange prose. In turn, we come to judge machine intelligence by its humanness, orienting development efforts towards writing prose that we would have written ourselves.

Earlier this year, the Wall Street Journal (WSJ) published an article with an interactive sidebar featuring excerpts from financial investment research reports. Readers were prompted to identify whether the excerpts were written by robots or humans. Admittedly, Wall Street’s preference for terse prose over poetic flourish makes a challenge like this plausible. “Q2 cash balance expectation of $830m implies ~$80m of cash burn in Q2 after a $140m reduction in cash balance in Q1,” a sampled sentence, is effectively just three data points fused together with syntax. And that’s no coincidence. White-collar workers like journalists need not fear for their job security (at least not yet…) because new natural language generation (NLG) algorithms are very good at representing structured data sets in prose, but not yet very good at much else. That capability in itself is very powerful, as our ability to draw insights from data often depends on how they are presented (e.g., a chart reveals insights one would have missed in rows and columns). But it is a far cry from the creative courage required to build a world on a blank page.

Story 26

Moniker, a design studio based in Amsterdam, wrote a simple query that scans Twitter for sentences in the form “it’s + hour + : + minute + am/pm + and +” to compose a real-time global diary of daily activities. The “it’s [time] and I am” formula tends to elicit predictable confessions or complaints, showing how expressions automate our thoughts: “It’s 12:20 and I need a drink;” “It’s 1:00 pm and I have not moved from my bed;” “It’s 11:00 pm and I’ve finally got a decent cup of coffee.” Twide and Twejudice replaces most of the words in the dialogue of Austen’s Pride and Prejudice, each with a word used in a similar context on Twitter, resulting in frivolous dialogue: (Mr Bennet asking Mrs Bennet about Mr Bingley:) “Is he/she overrun 0r single?” (Mrs Bennet exclaiming about Mr Bingley’s arrival:) “What _a fineee thingi 4my rageaholics girls!” While these lack the sophistication of The Seeker, by polluting Austen with Twitter diction, they illustrate how contemporary media have modified communication norms.

Technical constraints explain why NaNoGenMo has come to align itself with poetics of recontextualization and reassembly. Indeed, genuine NLG algorithms, that is, those that can build words and syntax from the building blocks of letters and get smarter over time, are still nascent. Most of the 2014 submissions instead use rules to transform existing texts in creative ways, which also leads to topical similarities.

Evaluating these works by their capacity to read like human prose is a stale exercise because what qualifies as “natural” language is relative, not absolute. Our own linguistic habits are developed through interaction with others, be they members of a given social class, colleagues at work or school, or spambots littering our Twitter feeds. In a recent Medium post, Katie Rose Pipkin eloquently described how machines have already modified what we think of as natural language, whether we're cognizant of it or not. We speak differently to search tools and virtual assistants because we have come to develop a tacit understanding of how they work and can modify our requests to communicate effectively.

Allison Parrish’s I Waded in Clear Water uses sentiment analysis algorithms, which rank sentences based upon features that indicate emotional texture, to transform Gustavus Hindman Miller’s Ten Thousand Dreams Interpreted. Parrish mobilizes the formulaic “action” and “denotation” structure of Miller’s text (action = “To see an oak full of acorns”; denotation = “denotes increase and promotion”). She first transforms the actions into first-person, simple-past sentences (“I saw an oak full of acorns”) and then reorders the sentences from the worst to the best thing that can happen in dreams, according to a score given by a sentiment analysis algorithm run on the denotation. The sentiment scores create short chapters (“I drove into muddy water. I saw others weeding.”) and longer chapters with paratactic strings of disjointed actions: “…I descended a ladder. I saw any one lame. I saw my lover taking laudanum through disappointment. I heard mocking laughter. I kept a ledge. I had lice on my body. I saw. I lost it. I felt melancholy over any event. I saw others melancholy. I sent a message….” According to the sentiment algorithm, wading in clear water is our best dream.

What if machines generated text with different stylistic goals? Or rather, what if we evaluated machine intelligence not by its humanness but by its alienness, by its ability to generate something beyond what we could have created—or would have thought to create—without the assistance of an algorithm? What if automated prose could rupture our automatized perceptions, as Shklovsky described poetry in Art as Device, and offer a new vehicle for our own creativity?

The latest developments in machine learning are enabling machines to develop models of us in turn, ever updating what information they present and how they present it to match the input we provide. Kazemi is addressing this new give-and-take between man and machine head-on in his 2015 NaNoGenMo submission, “co-authoring” a novel with an algorithm: for every ten sentences the algorithm drafts, he commits only the one he, as the human, likes best. “Who wrote the book?” he asks. “[The algorithm] wrote literally every word, but [I] dictated nearly the entire form of the novel.” This is the same kind of dynamic that new research tools built on IBM Watson are presenting to lawyers and doctors: ROSS, a legal tool built on the Watson API, presents answers to research questions, and all the lawyer has to do is commit the answer she likes best. If NaNoGenMo helps us think more deeply about that dynamic, it can offer important insights into the future of AI.

NLG algorithms are generally considered to be a form of “artificial” or “machine intelligence” because they do things—like write news articles about sports or the weather, or write real estate ads, as the prototype my Fast Forward Labs colleagues built—we believe humans alone can do. (I hope to explore the implications of the historical, relativist concept of artificial intelligence, espoused by people like Nancy Fulda, in a separate post.) As illustrated in the WSJ article, most people then evaluate NLG performance like André Bazin evaluates style in realism: as the art of realism lies in the seeming absence of artifice, so too does the art of algorithms lie in the seeming absence of automation. Commercialization only enhances this push towards verisimilitude, as investment banks and news agencies like Forbes won’t pay top dollar for software that generates strange prose. In turn, we come to judge machine intelligence by its humanness, orienting development efforts towards writing prose that we would have written ourselves.

While open to anyone and, as in NaNoWriMo, governed by the single constraint that submissions contain at least 50,000 words, NaNoGenMo is gradually defining itself as a cohesive artistic movement that uses algorithms to experiment with literary form. The group’s identity is partly generated by ressentiment towards negative criticism that their “disjointed, robotic scripts” are “unlikely to trouble Booker judges.” Last year, one participant mocked how “futile it is to try to explain what we’re actually doing here, to the normals.” More positively, they are shaping identity through shared formal and critical resources. John Ohno (alias enkiv2) posted code to generate sestinas, haikus, and synonyms. Allison Parrish (alias aparrish) shared an interface to the Carnegie Mellon Pronouncing Dictionary that enables users to do things like scrape the dictionary for rhymes for a given word. Finally, Isaac Karth (alias ikarth) explained to members how the group’s tendency to assemble new poetry from prior texts has intellectual roots in Dadaism, Burroughs’s cut-up techniques, and the constraint-oriented works of Oulipo. When I spoke with Kazemi about the project, he said that Ken Goldsmith’s Uncreative Writing had inspired his thinking on how NaNoGenMo can challenge customary notions of authorship and creativity.

It is this search to use automation as a vehicle for defamiliarization that makes NaNoGenMo so exciting. Darius Kazemi, an internet artist who runs an annual Bot Summit, created NaNoGenMo “on a whim” in November 2013. Thoughtful about literary form, Kazemi was amused by the fact that National Novel Writing Month (NaNoWriMo) set only two criteria for participants: submissions must be written in 30 days (the month of November) and must comprise at least 50,000 words. The absence of form invited experimentation: why write a novel when you can write an algorithm that writes a novel? He tweeted his idea, and a new community formed on GitHub, a web-based software development collaboration tool.

At least two 2014 submissions use dreams as a locus to explore the odd beauty of machine intelligence. Thricedotted’s The Seeker relates the autobiography of a machine trying to “learn about human behavior by reading WikiHow.” The work is visually beautiful, with each iteration of the algorithm’s operations punctuated by pages that raindrop abstractions and house aphorisms like “imagine not one thing could be undirected.” Like the hopscotch overtones in Cortázar’s Rayuela, the aphorisms encourage the reader to perceive meaningful patterns in what might otherwise be random data (Thricedotted’s internet identity often mentions apophenia). Time and again, the algorithm repeats a “work, scan, imagine” loop, scraping WikiHow, searching plain text memories for a concept encountered during “work,” and building a dream sequence—or “univision”—from concepts it doesn’t recognize. These univisions contain the most surprising poetry in the work, where beauty arises from the reader’s ineluctable tendency to feel meaning in fragments.

Earlier this year, the Wall Street Journal (WSJ) published an article with an interactive sidebar featuring excerpts from financial investment research reports. Readers were prompted to identify whether the excerpts were written by robots or humans. Admittedly, Wall Street’s preference for terse prose over poetic flourish makes a challenge like this plausible. “Q2 cash balance expectation of $830m implies ~$80m of cash burn in Q2 after a $140m reduction in cash balance in Q1,” a sampled sentence, is effectively just three data points fused together with syntax. And that’s no coincidence. White-collar workers like journalists need not fear for their job security (at least not yet…) because new natural language generation (NLG) algorithms are very good at representing structured data sets in prose, but not yet very good at much else. That capability in itself is very powerful, as our ability to draw insights from data often depends on how they are presented (e.g., a chart reveals insights one would have missed in rows and columns). But it is a far cry from the creative courage required to build a world on a blank page.

Story 27

Allison Parrish’s I Waded in Clear Water uses sentiment analysis algorithms, which rank sentences based upon features that indicate emotional texture, to transform Gustavus Hindman Miller’s Ten Thousand Dreams Interpreted. Parrish mobilizes the formulaic “action” and “denotation” structure of Miller’s text (action = “To see an oak full of acorns”; denotation = “denotes increase and promotion”). She first transforms the actions into first-person, simple-past sentences (“I saw an oak full of acorns”) and then reorders the sentences from the worst to the best thing that can happen in dreams, according to a score given by a sentiment analysis algorithm run on the denotation. The sentiment scores create short chapters (“I drove into muddy water. I saw others weeding.”) and longer chapters with paratactic strings of disjointed actions: “…I descended a ladder. I saw any one lame. I saw my lover taking laudanum through disappointment. I heard mocking laughter. I kept a ledge. I had lice on my body. I saw. I lost it. I felt melancholy over any event. I saw others melancholy. I sent a message….” According to the sentiment algorithm, wading in clear water is our best dream.

Technical constraints explain why NaNoGenMo has come to align itself with poetics of recontextualization and reassembly. Indeed, genuine NLG algorithms, that is, those that can build words and syntax from the building blocks of letters and get smarter over time, are still nascent. Most of the 2014 submissions instead use rules to transform existing texts in creative ways, which also leads to topical similarities.

While open to anyone and, as in NaNoWriMo, governed by the single constraint that submissions contain at least 50,000 words, NaNoGenMo is gradually defining itself as a cohesive artistic movement that uses algorithms to experiment with literary form. The group’s identity is partly generated by ressentiment towards negative criticism that their “disjointed, robotic scripts” are “unlikely to trouble Booker judges.” Last year, one participant mocked how “futile it is to try to explain what we’re actually doing here, to the normals.” More positively, they are shaping identity through shared formal and critical resources. John Ohno (alias enkiv2) posted code to generate sestinas, haikus, and synonyms. Allison Parrish (alias aparrish) shared an interface to the Carnegie Mellon Pronouncing Dictionary that enables users to do things like scrape the dictionary for rhymes for a given word. Finally, Isaac Karth (alias ikarth) explained to members how the group’s tendency to assemble new poetry from prior texts has intellectual roots in Dadaism, Burroughs’s cut-up techniques, and the constraint-oriented works of Oulipo. When I spoke with Kazemi about the project, he said that Ken Goldsmith’s Uncreative Writing had inspired his thinking on how NaNoGenMo can challenge customary notions of authorship and creativity.

It is this search to use automation as a vehicle for defamiliarization that makes NaNoGenMo so exciting. Darius Kazemi, an internet artist who runs an annual Bot Summit, created NaNoGenMo “on a whim” in November 2013. Thoughtful about literary form, Kazemi was amused by the fact that National Novel Writing Month (NaNoWriMo) set only two criteria for participants: submissions must be written in 30 days (the month of November) and must comprise at least 50,000 words. The absence of form invited experimentation: why write a novel when you can write an algorithm that writes a novel? He tweeted his idea, and a new community formed on GitHub, a web-based software development collaboration tool.

The latest developments in machine learning are enabling machines to develop models of us in turn, ever updating what information they present and how they present it to match the input we provide. Kazemi is addressing this new give-and-take between man and machine head-on in his 2015 NaNoGenMo submission, “co-authoring” a novel with an algorithm: for every ten sentences the algorithm drafts, he commits only the one he, as the human, likes best. “Who wrote the book?” he asks. “[The algorithm] wrote literally every word, but [I] dictated nearly the entire form of the novel.” This is the same kind of dynamic that new research tools built on IBM Watson are presenting to lawyers and doctors: ROSS, a legal tool built on the Watson API, presents answers to research questions, and all the lawyer has to do is commit the answer she likes best. If NaNoGenMo helps us think more deeply about that dynamic, it can offer important insights into the future of AI.

Moniker, a design studio based in Amsterdam, wrote a simple query that scans Twitter for sentences in the form “it’s + hour + : + minute + am/pm + and +” to compose a real-time global diary of daily activities. The “it’s [time] and I am” formula tends to elicit predictable confessions or complaints, showing how expressions automate our thoughts: “It’s 12:20 and I need a drink;” “It’s 1:00 pm and I have not moved from my bed;” “It’s 11:00 pm and I’ve finally got a decent cup of coffee.” Twide and Twejudice replaces most of the words in the dialogue of Austen’s Pride and Prejudice, each with a word used in a similar context on Twitter, resulting in frivolous dialogue: (Mr Bennet asking Mrs Bennet about Mr Bingley:) “Is he/she overrun 0r single?” (Mrs Bennet exclaiming about Mr Bingley’s arrival:) “What _a fineee thingi 4my rageaholics girls!” While these lack the sophistication of The Seeker, by polluting Austen with Twitter diction, they illustrate how contemporary media have modified communication norms.

NLG algorithms are generally considered to be a form of “artificial” or “machine intelligence” because they do things—like write news articles about sports or the weather, or write real estate ads, as the prototype my Fast Forward Labs colleagues built—we believe humans alone can do. (I hope to explore the implications of the historical, relativist concept of artificial intelligence, espoused by people like Nancy Fulda, in a separate post.) As illustrated in the WSJ article, most people then evaluate NLG performance like André Bazin evaluates style in realism: as the art of realism lies in the seeming absence of artifice, so too does the art of algorithms lie in the seeming absence of automation. Commercialization only enhances this push towards verisimilitude, as investment banks and news agencies like Forbes won’t pay top dollar for software that generates strange prose. In turn, we come to judge machine intelligence by its humanness, orienting development efforts towards writing prose that we would have written ourselves.

Evaluating these works by their capacity to read like human prose is a stale exercise because what qualifies as “natural” language is relative, not absolute. Our own linguistic habits are developed through interaction with others, be they members of a given social class, colleagues at work or school, or spambots littering our Twitter feeds. In a recent Medium post, Katie Rose Pipkin eloquently described how machines have already modified what we think of as natural language, whether we're cognizant of it or not. We speak differently to search tools and virtual assistants because we have come to develop a tacit understanding of how they work and can modify our requests to communicate effectively.

Earlier this year, the Wall Street Journal (WSJ) published an article with an interactive sidebar featuring excerpts from financial investment research reports. Readers were prompted to identify whether the excerpts were written by robots or humans. Admittedly, Wall Street’s preference for terse prose over poetic flourish makes a challenge like this plausible. “Q2 cash balance expectation of $830m implies ~$80m of cash burn in Q2 after a $140m reduction in cash balance in Q1,” a sampled sentence, is effectively just three data points fused together with syntax. And that’s no coincidence. White-collar workers like journalists need not fear for their job security (at least not yet…) because new natural language generation (NLG) algorithms are very good at representing structured data sets in prose, but not yet very good at much else. That capability in itself is very powerful, as our ability to draw insights from data often depends on how they are presented (e.g., a chart reveals insights one would have missed in rows and columns). But it is a far cry from the creative courage required to build a world on a blank page.

At least two 2014 submissions use dreams as a locus to explore the odd beauty of machine intelligence. Thricedotted’s The Seeker relates the autobiography of a machine trying to “learn about human behavior by reading WikiHow.” The work is visually beautiful, with each iteration of the algorithm’s operations punctuated by pages that raindrop abstractions and house aphorisms like “imagine not one thing could be undirected.” Like the hopscotch overtones in Cortázar’s Rayuela, the aphorisms encourage the reader to perceive meaningful patterns in what might otherwise be random data (Thricedotted’s internet identity often mentions apophenia). Time and again, the algorithm repeats a “work, scan, imagine” loop, scraping WikiHow, searching plain text memories for a concept encountered during “work,” and building a dream sequence—or “univision”—from concepts it doesn’t recognize. These univisions contain the most surprising poetry in the work, where beauty arises from the reader’s ineluctable tendency to feel meaning in fragments.

What if machines generated text with different stylistic goals? Or rather, what if we evaluated machine intelligence not by its humanness but by its alienness, by its ability to generate something beyond what we could have created—or would have thought to create—without the assistance of an algorithm? What if automated prose could rupture our automatized perceptions, as Shklovsky described poetry in Art as Device, and offer a new vehicle for our own creativity?

Story 28

While open to anyone and, as in NaNoWriMo, governed by the single constraint that submissions contain at least 50,000 words, NaNoGenMo is gradually defining itself as a cohesive artistic movement that uses algorithms to experiment with literary form. The group’s identity is partly generated by ressentiment towards negative criticism that their “disjointed, robotic scripts” are “unlikely to trouble Booker judges.” Last year, one participant mocked how “futile it is to try to explain what we’re actually doing here, to the normals.” More positively, they are shaping identity through shared formal and critical resources. John Ohno (alias enkiv2) posted code to generate sestinas, haikus, and synonyms. Allison Parrish (alias aparrish) shared an interface to the Carnegie Mellon Pronouncing Dictionary that enables users to do things like scrape the dictionary for rhymes for a given word. Finally, Isaac Karth (alias ikarth) explained to members how the group’s tendency to assemble new poetry from prior texts has intellectual roots in Dadaism, Burroughs’s cut-up techniques, and the constraint-oriented works of Oulipo. When I spoke with Kazemi about the project, he said that Ken Goldsmith’s Uncreative Writing had inspired his thinking on how NaNoGenMo can challenge customary notions of authorship and creativity.

NLG algorithms are generally considered to be a form of “artificial” or “machine intelligence” because they do things—like write news articles about sports or the weather, or write real estate ads, as the prototype my Fast Forward Labs colleagues built—we believe humans alone can do. (I hope to explore the implications of the historical, relativist concept of artificial intelligence, espoused by people like Nancy Fulda, in a separate post.) As illustrated in the WSJ article, most people then evaluate NLG performance like André Bazin evaluates style in realism: as the art of realism lies in the seeming absence of artifice, so too does the art of algorithms lie in the seeming absence of automation. Commercialization only enhances this push towards verisimilitude, as investment banks and news agencies like Forbes won’t pay top dollar for software that generates strange prose. In turn, we come to judge machine intelligence by its humanness, orienting development efforts towards writing prose that we would have written ourselves.

Evaluating these works by their capacity to read like human prose is a stale exercise because what qualifies as “natural” language is relative, not absolute. Our own linguistic habits are developed through interaction with others, be they members of a given social class, colleagues at work or school, or spambots littering our Twitter feeds. In a recent Medium post, Katie Rose Pipkin eloquently described how machines have already modified what we think of as natural language, whether we're cognizant of it or not. We speak differently to search tools and virtual assistants because we have come to develop a tacit understanding of how they work and can modify our requests to communicate effectively.

At least two 2014 submissions use dreams as a locus to explore the odd beauty of machine intelligence. Thricedotted’s The Seeker relates the autobiography of a machine trying to “learn about human behavior by reading WikiHow.” The work is visually beautiful, with each iteration of the algorithm’s operations punctuated by pages that raindrop abstractions and house aphorisms like “imagine not one thing could be undirected.” Like the hopscotch overtones in Cortázar’s Rayuela, the aphorisms encourage the reader to perceive meaningful patterns in what might otherwise be random data (Thricedotted’s internet identity often mentions apophenia). Time and again, the algorithm repeats a “work, scan, imagine” loop, scraping WikiHow, searching plain text memories for a concept encountered during “work,” and building a dream sequence—or “univision”—from concepts it doesn’t recognize. These univisions contain the most surprising poetry in the work, where beauty arises from the reader’s ineluctable tendency to feel meaning in fragments.

Technical constraints explain why NaNoGenMo has come to align itself with poetics of recontextualization and reassembly. Indeed, genuine NLG algorithms, that is, those that can build words and syntax from the building blocks of letters and get smarter over time, are still nascent. Most of the 2014 submissions instead use rules to transform existing texts in creative ways, which also leads to topical similarities.

What if machines generated text with different stylistic goals? Or rather, what if we evaluated machine intelligence not by its humanness but by its alienness, by its ability to generate something beyond what we could have created—or would have thought to create—without the assistance of an algorithm? What if automated prose could rupture our automatized perceptions, as Shklovsky described poetry in Art as Device, and offer a new vehicle for our own creativity?

Earlier this year, the Wall Street Journal (WSJ) published an article with an interactive sidebar featuring excerpts from financial investment research reports. Readers were prompted to identify whether the excerpts were written by robots or humans. Admittedly, Wall Street’s preference for terse prose over poetic flourish makes a challenge like this plausible. “Q2 cash balance expectation of $830m implies ~$80m of cash burn in Q2 after a $140m reduction in cash balance in Q1,” a sampled sentence, is effectively just three data points fused together with syntax. And that’s no coincidence. White-collar workers like journalists need not fear for their job security (at least not yet…) because new natural language generation (NLG) algorithms are very good at representing structured data sets in prose, but not yet very good at much else. That capability in itself is very powerful, as our ability to draw insights from data often depends on how they are presented (e.g., a chart reveals insights one would have missed in rows and columns). But it is a far cry from the creative courage required to build a world on a blank page.

It is this search to use automation as a vehicle for defamiliarization that makes NaNoGenMo so exciting. Darius Kazemi, an internet artist who runs an annual Bot Summit, created NaNoGenMo “on a whim” in November 2013. Thoughtful about literary form, Kazemi was amused by the fact that National Novel Writing Month (NaNoWriMo) set only two criteria for participants: submissions must be written in 30 days (the month of November) and must comprise at least 50,000 words. The absence of formal constraints invited experimentation: why write a novel when you can write an algorithm that writes a novel? He tweeted his idea, and a new community formed on GitHub, a web-based collaboration platform for software development.

Moniker, a design studio based in Amsterdam, wrote a simple query that scans Twitter for sentences in the form “it’s + hour + : + minute + am/pm + and +” to compose a real-time global diary of daily activities. The “it’s [hour] and I am” formula tends to elicit predictable confessions or complaints, showing how stock expressions automate our thoughts: “It’s 12:20 and I need a drink;” “It’s 1:00 pm and I have not moved from my bed;” “It’s 11:00 pm and I’ve finally got a decent cup of coffee.” Twide and Twejudice replaces most of the words in the dialogue of Austen’s Pride and Prejudice with words used in similar contexts on Twitter, resulting in frivolous dialogue: (Mr Bennet asking Mrs Bennet about Mr Bingley:) “Is he/she overrun 0r single?” (Mrs Bennet exclaiming about Mr Bingley’s arrival:) “What _a fineee thingi 4my rageaholics girls!” While these works lack the sophistication of The Seeker, by polluting Austen with Twitter diction they illustrate how contemporary media have modified communication norms.
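
As a rough idea of what such a query involves, here is one way to express the pattern as a Python regular expression. Moniker’s actual query is not public in this essay, so the regex and sample tweets below are illustrative; the am/pm marker is made optional because some of the quoted tweets omit it.

import re

# One illustrative rendering of "it's + hour + : + minute + am/pm + and +".
PATTERN = re.compile(
    r"it[’']s\s+(\d{1,2}):(\d{2})\s*(am|pm)?\s+and\s+(.*)",
    re.IGNORECASE,
)

tweets = [
    "It's 12:20 and I need a drink",
    "it’s 1:00 pm and I have not moved from my bed",
    "an unrelated tweet about lunch",
]

for tweet in tweets:
    match = PATTERN.search(tweet)
    if match:
        hour, minute, meridiem, rest = match.groups()
        stamp = f"{hour}:{minute} {meridiem}" if meridiem else f"{hour}:{minute}"
        print(stamp, "->", rest)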

Allison Parrish’s I Waded in Clear Water uses sentiment analysis algorithms, which rank sentences based upon features that indicate emotional texture, to transform Gustavus Hindman Miller’s Ten Thousand Dreams Interpreted. Parrish mobilizes the formulaic “action” and “denotation” structure of Miller’s text (action = “To see an oak full of acorns”; denotation = “denotes increase and promotion”). She first transforms the actions into first-person, simple-past sentences (“I saw an oak full of acorns”) and then reorders the sentences from the worst to the best thing that can happen in dreams, according to a score given by a sentiment analysis algorithm run on the denotation. The sentiment scores create short chapters (“I drove into muddy water. I saw others weeding.”) and longer chapters with paratactic strings of disjointed actions: “…I descended a ladder. I saw any one lame. I saw my lover taking laudanum through disappointment. I heard mocking laughter. I kept a ledge. I had lice on my body. I saw. I lost it. I felt melancholy over any event. I saw others melancholy. I sent a message….” According to the sentiment algorithm, wading in clear water is our best dream.
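
The pipeline is simple enough to sketch. The following Python toy mirrors the described steps; the dream entries, the verb table, and the word-list sentiment scorer are invented stand-ins for Miller’s corpus and Parrish’s actual sentiment model.

# Invented sample of Miller's (action, denotation) entries.
ENTRIES = [
    ("To see an oak full of acorns", "denotes increase and promotion"),
    ("To drive into muddy water", "denotes disappointment and loss"),
    ("To wade in clear water", "denotes pleasure and success"),
]

POSITIVE = {"increase", "promotion", "pleasure", "success"}
NEGATIVE = {"disappointment", "loss"}

def score(denotation):
    """Crude lexicon sentiment: positive words minus negative words."""
    words = denotation.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

def first_person_past(action):
    """'To see an oak...' -> 'I saw an oak...' via a tiny verb table."""
    verb_map = {"see": "saw", "drive": "drove", "wade": "waded"}
    _, verb, *rest = action.split()  # drop the leading 'To'
    return " ".join(["I", verb_map[verb]] + rest) + "."

# Reorder the dreams from worst to best, scored on the denotation.
for action, denotation in sorted(ENTRIES, key=lambda e: score(e[1])):
    print(first_person_past(action))

Run as-is, the muddy water prints first and “I waded in clear water.” prints last, echoing the title.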

The latest developments in machine learning are enabling machines to develop models of us in turn, ever updating what information they present and how they present it to match the input we provide. Kazemi is addressing this new give-and-take between man and machine head-on in his 2015 NaNoGenMo submission, “co-authoring” a novel with an algorithm: for every ten sentences the algorithm drafts, he commits only the one he, as the human, likes best. “Who wrote the book?” he asks. “[The algorithm] wrote literally every word, but [I] dictated nearly the entire form of the novel.” This is the same kind of dynamic that new research tools built on IBM Watson are presenting to lawyers and doctors: ROSS, a legal tool built on the Watson API, presents answers to research questions, and all the lawyer has to do is commit the answer she likes best. If NaNoGenMo helps us think more deeply about that dynamic, it can offer important insights into the future of AI.
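
The mechanics of that collaboration are worth pausing on, so here is a minimal human-in-the-loop sketch under stated assumptions: draft_sentence() stands in for whatever generator Kazemi actually used, and human judgment is crudely simulated by a length heuristic so the example runs unattended.

import random

random.seed(7)  # reproducible toy output

FRAGMENTS = ["the rain kept its own ledger", "she counted the doors twice",
             "nobody answered the harbor", "the map forgot the road"]

def draft_sentence():
    """Stand-in generator; the real project used an actual language model."""
    return random.choice(FRAGMENTS).capitalize() + "."

def human_pick(candidates):
    """Stand-in for human taste; here, simply the longest candidate."""
    return max(candidates, key=len)

novel = []
for _ in range(5):  # five committed sentences, chosen from fifty drafts
    candidates = [draft_sentence() for _ in range(10)]
    novel.append(human_pick(candidates))

print(" ".join(novel))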

Story 29

Earlier this year, the Wall Street Journal (WSJ) published an article with an interactive sidebar featuring excerpts from financial investment research reports. Readers were prompted to identify whether the excerpts were written by robots or humans. Admittedly, Wall Street’s preference for terse prose over poetic flourish makes such a challenge sensible. “Q2 cash balance expectation of $830m implies ~$80m of cash burn in Q2 after a $140m reduction in cash balance in Q1,” a sampled sentence, is effectively just three data points fused together with syntax. And that’s no coincidence. White-collar workers like journalists need not fear for their job security (at least not yet…) because new natural language generation (NLG) algorithms are very good at representing structured data sets in prose, but not yet very good at much else. That capability in itself is very powerful, as our ability to draw insights from data often depends on how they are presented (e.g., a chart reveals insights one would have missed in rows and columns). But it is a far cry from the creative courage required to build a world on a blank page.

What if machines generated text with different stylistic goals? Or rather, what if we evaluated machine intelligence not by its humanness but by its alienness, by its ability to generate something beyond what we could have created—or would have thought to create—without the assistance of an algorithm? What if automated prose could rupture our automatized perceptions, as Shklovsky described poetry in Art as Device, and offer a new vehicle for our own creativity?

Allison Parrish’s I Waded in Clear Water uses sentiment analysis algorithms, which rank sentences based upon features that indicate emotional texture, to transform Gustavus Hindman Miller’s Ten Thousand Dreams Interpreted. Parrish mobilizes the formulaic “action” and “denotation” structure of Miller’s text (action = “To see an oak full of acorns”; denotation = “denotes increase and promotion”). She first transforms the actions into first-person, simple-past sentences (“I saw an oak full of acorns”) and then reorders the sentences from the worst to the best thing that can happen in dreams, according to a score given by a sentiment analysis algorithm run on the denotation. The sentiment scores create short chapters (“I drove into muddy water. I saw others weeding.”) and longer chapters with paratactic strings of disjointed actions: “…I descended a ladder. I saw any one lame. I saw my lover taking laudanum through disappointment. I heard mocking laughter. I kept a ledge. I had lice on my body. I saw. I lost it. I felt melancholy over any event. I saw others melancholy. I sent a message….” According to the sentiment algorithm, wading in clear water is our best dream.

It is this search to use automation as a vehicle for defamiliarization that makes NaNoGenMo so exciting. Darius Kazemi, an internet artist who runs an annual Bot Summit, created NaNoGenMo “on a whim” in November 2013. Thoughtful about literary form, Kazemi was amused by the fact that National Novel Writing Month (NaNoWriMo) set only two criteria for participants: submissions must be written in 30 days (the month of November) and must comprise at least 50,000 words. The absence of formal constraints invited experimentation: why write a novel when you can write an algorithm that writes a novel? He tweeted his idea, and a new community formed on GitHub, a web-based collaboration platform for software development.

At least two 2014 submissions use dreams as a locus to explore the odd beauty of machine intelligence. Thricedotted’s The Seeker relates the autobiography of a machine trying to “learn about human behavior by reading WikiHow.” The work is visually beautiful, with each iteration of the algorithm’s operations punctuated by pages that raindrop abstractions and house aphorisms like “imagine not one thing could be undirected.” Like the hopscotch overtones in Cortázar’s Rayuela, the aphorisms encourage the reader to perceive meaningful patterns in what might otherwise be random data (Thricedotted’s internet identity often mentions apophenia). Time and again, the algorithm repeats a “work, scan, imagine” loop, scraping WikiHow, searching plain text memories for a concept encountered during “work,” and building a dream sequence—or “univision”—from concepts it doesn’t recognize. These univisions contain the most surprising poetry in the work, where beauty arises from the reader’s ineluctable tendency to feel meaning in fragments.

Evaluating these works by their capacity to read like human prose is a stale exercise because what qualifies as “natural” language is relative, not absolute. Our own linguistic habits are developed through interaction with others, be they members of a given social class, colleagues at work or school, or spambots littering our Twitter feeds. In a recent Medium post, Katie Rose Pipkin eloquently described how machines have already modified what we think of as natural language, whether we're cognizant of it or not. We speak differently to search tools and virtual assistants because we have come to develop a tacit understanding of how they work and can modify our requests to communicate effectively.

Technical constraints explain why NaNoGenMo has come to align itself with poetics of recontextualization and reassembly. Indeed, genuine NLG algorithms, that is, those that can build words and syntax from the building blocks of letters and get smarter over time, are still nascent. Most of the 2014 submissions instead use rules to transform existing texts in creative ways, which also leads to topical similarities.

While open to anyone and, as in NaNoWriMo, governed by the single constraint that submissions contain at least 50,000 words, NaNoGenMo is gradually defining itself as a cohesive artistic movement that uses algorithms to experiment with literary form. The group’s identity is partly generated by ressentiment towards negative criticism that their “disjointed, robotic scripts” are “unlikely to trouble Booker judges.” Last year, one participant mocked how “futile it is to try to explain what we’re actually doing here, to the normals.” More positively, they are shaping identity through shared formal and critical resources. John Ohno (alias enkiv2) posted code to generate sestinas, haikus, and synonyms. Allison Parrish (alias aparrish) shared an interface to the Carnegie Mellon Pronouncing Dictionary that enables users to do things like scrape the dictionary for rhymes for a given word. Finally, Isaac Karth (alias ikarth) explained to members how the group’s tendency to assemble new poetry from prior texts has intellectual roots in Dadaism, Burroughs’s cut-up techniques, and the constraint-oriented works of Oulipo. When I spoke with Kazemi about the project, he said that Ken Goldsmith’s Uncreative Writing had inspired his thinking on how NaNoGenMo can challenge customary notions of authorship and creativity.
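
Parrish’s original interface isn’t reproduced here, but the same lookup can be sketched with her later pronouncing package (pip install pronouncing), a Python wrapper around the same CMU Pronouncing Dictionary; treat the exact outputs below as indicative rather than guaranteed.

import pronouncing

word = "climbing"
print(pronouncing.phones_for_word(word))  # CMU phoneme transcriptions
print(pronouncing.rhymes(word))           # dictionary words that rhyme
# e.g. ['K L AY1 M IH0 NG'] and ['diming', 'liming', 'priming', ...]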

Moniker, a design studio based in Amsterdam, wrote a simple query that scans Twitter for sentences in the form “it’s + hour + : + minute + am/pm + and +” to compose a real-time global diary of daily activities. The “it’s [hour] and I am” formula tends to elicit predictable confessions or complaints, showing how stock expressions automate our thoughts: “It’s 12:20 and I need a drink;” “It’s 1:00 pm and I have not moved from my bed;” “It’s 11:00 pm and I’ve finally got a decent cup of coffee.” Twide and Twejudice replaces most of the words in the dialogue of Austen’s Pride and Prejudice with words used in similar contexts on Twitter, resulting in frivolous dialogue: (Mr Bennet asking Mrs Bennet about Mr Bingley:) “Is he/she overrun 0r single?” (Mrs Bennet exclaiming about Mr Bingley’s arrival:) “What _a fineee thingi 4my rageaholics girls!” While these works lack the sophistication of The Seeker, by polluting Austen with Twitter diction they illustrate how contemporary media have modified communication norms.

NLG algorithms are generally considered to be a form of “artificial” or “machine intelligence” because they do things—like write news articles about sports or the weather, or write real estate ads, as the prototype my Fast Forward Labs colleagues built—that we believe humans alone can do. (I hope to explore the implications of the historical, relativist concept of artificial intelligence, espoused by people like Nancy Fulda, in a separate post.) As illustrated in the WSJ article, most people then evaluate NLG performance the way André Bazin evaluates style in realism: as the art of realism lies in the seeming absence of artifice, so too does the art of algorithms lie in the seeming absence of automation. Commercialization only enhances this push towards verisimilitude, as investment banks and news agencies like Forbes won’t pay top dollar for software that generates strange prose. In turn, we come to judge machine intelligence by its humanness, orienting development efforts towards writing prose that we would have written ourselves.

The latest developments in machine learning are enabling machines to develop models of us in turn, ever updating what information they present and how they present it to match the input we provide. Kazemi is addressing this new give-and-take between man and machine head-on in his 2015 NaNoGenMo submission, “co-authoring” a novel with an algorithm: for every ten sentences the algorithm drafts, he commits only the one he, as the human, likes best. “Who wrote the book?” he asks. “[The algorithm] wrote literally every word, but [I] dictated nearly the entire form of the novel.” This is the same kind of dynamic that new research tools built on IBM Watson are presenting to lawyers and doctors: ROSS, a legal tool built on the Watson API, presents answers to research questions, and all the lawyer has to do is commit the answer she likes best. If NaNoGenMo helps us think more deeply about that dynamic, it can offer important insights into the future of AI.

Story 30

It is this search to use automation as a vehicle for defamiliarization that makes NaNoGenMo so exciting. Darius Kazemi, an internet artist who runs an annual Bot Summit, created NaNoGenMo “on a whim” in November 2013. Thoughtful about literary form, Kazemi was amused by the fact that National Novel Writing Month (NaNoWriMo) set only two criteria for participants: submissions must be written in 30 days (the month of November) and must comprise at least 50,000 words. The absence of formal constraints invited experimentation: why write a novel when you can write an algorithm that writes a novel? He tweeted his idea, and a new community formed on GitHub, a web-based collaboration platform for software development.

Earlier this year, the Wall Street Journal (WSJ) published an article with an interactive sidebar featuring excerpts from financial investment research reports. Readers were prompted to identify whether the excerpts were written by robots or humans. Admittedly, Wall Street’s preference for terse prose over poetic flourish makes such a challenge sensible. “Q2 cash balance expectation of $830m implies ~$80m of cash burn in Q2 after a $140m reduction in cash balance in Q1,” a sampled sentence, is effectively just three data points fused together with syntax. And that’s no coincidence. White-collar workers like journalists need not fear for their job security (at least not yet…) because new natural language generation (NLG) algorithms are very good at representing structured data sets in prose, but not yet very good at much else. That capability in itself is very powerful, as our ability to draw insights from data often depends on how they are presented (e.g., a chart reveals insights one would have missed in rows and columns). But it is a far cry from the creative courage required to build a world on a blank page.

Technical constraints explain why NaNoGenMo has come to align itself with poetics of recontextualization and reassembly. Indeed, genuine NLG algorithms, that is, those that can build words and syntax from the building blocks of letters and get smarter over time, are still nascent. Most of the 2014 submissions instead use rules to transform existing texts in creative ways, which also leads to topical similarities.

The latest developments in machine learning are enabling machines to develop models of us in turn, ever updating what information they present and how they present it to match the input we provide. Kazemi is addressing this new give-and-take between man and machine head-on in his 2015 NaNoGenMo submission, “co-authoring” a novel with an algorithm: for every ten sentences the algorithm drafts, he commits only the one he, as the human, likes best. “Who wrote the book?” he asks. “[The algorithm] wrote literally every word, but [I] dictated nearly the entire form of the novel.” This is the same kind of dynamic that new research tools built on IBM Watson are presenting to lawyers and doctors: ROSS, a legal tool built on the Watson API, presents answers to research questions, and all the lawyer has to do is commit the answer she likes best. If NaNoGenMo helps us think more deeply about that dynamic, it can offer important insights into the future of AI.

NLG algorithms are generally considered to be a form of “artificial” or “machine intelligence” because they do things—like write news articles about sports or the weather, or write real estate ads, as the prototype my Fast Forward Labs colleagues built—that we believe humans alone can do. (I hope to explore the implications of the historical, relativist concept of artificial intelligence, espoused by people like Nancy Fulda, in a separate post.) As illustrated in the WSJ article, most people then evaluate NLG performance the way André Bazin evaluates style in realism: as the art of realism lies in the seeming absence of artifice, so too does the art of algorithms lie in the seeming absence of automation. Commercialization only enhances this push towards verisimilitude, as investment banks and news agencies like Forbes won’t pay top dollar for software that generates strange prose. In turn, we come to judge machine intelligence by its humanness, orienting development efforts towards writing prose that we would have written ourselves.

What if machines generated text with different stylistic goals? Or rather, what if we evaluated machine intelligence not by its humanness but by its alienness, by its ability to generate something beyond what we could have created—or would have thought to create—without the assistance of an algorithm? What if automated prose could rupture our automatized perceptions, as Shklovsky described poetry in Art as Device, and offer a new vehicle for our own creativity?

At least two 2014 submissions use dreams as a locus to explore the odd beauty of machine intelligence. Thricedotted’s The Seeker relates the autobiography of a machine trying to “learn about human behavior by reading WikiHow.” The work is visually beautiful, with each iteration of the algorithm’s operations punctuated by pages that raindrop abstractions and house aphorisms like “imagine not one thing could be undirected.” Like the hopscotch overtones in Cortázar’s Rayuela, the aphorisms encourage the reader to perceive meaningful patterns in what might otherwise be random data (Thricedotted’s internet identity often mentions apophenia). Time and again, the algorithm repeats a “work, scan, imagine” loop, scraping WikiHow, searching plain text memories for a concept encountered during “work,” and building a dream sequence—or “univision”—from concepts it doesn’t recognize. These univisions contain the most surprising poetry in the work, where beauty arises from the reader’s ineluctable tendency to feel meaning in fragments.

Allison Parrish’s I Waded in Clear Water uses sentiment analysis algorithms, which rank sentences based upon features that indicate emotional texture, to transform Gustavus Hindman Miller’s Ten Thousand Dreams Interpreted. Parrish mobilizes the formulaic “action” and “denotation” structure of Miller’s text (action = “To see an oak full of acorns”; denotation = “denotes increase and promotion”). She first transforms the actions into first-person, simple-past sentences (“I saw an oak full of acorns”) and then reorders the sentences from the worst to the best thing that can happen in dreams, according to a score given by a sentiment analysis algorithm run on the denotation. The sentiment scores create short chapters (“I drove into muddy water. I saw others weeding.”) and longer chapters with paratactic strings of disjointed actions: “…I descended a ladder. I saw any one lame. I saw my lover taking laudanum through disappointment. I heard mocking laughter. I kept a ledge. I had lice on my body. I saw. I lost it. I felt melancholy over any event. I saw others melancholy. I sent a message….” According to the sentiment algorithm, wading in clear water is our best dream.

Moniker, a design studio based in Amsterdam, wrote a simple query that scans Twitter for sentences in the form “it’s + hour + : + minute + am/pm + and +” to compose a real-time global diary of daily activities. The “it’s [hour] and I am” formula tends to elicit predictable confessions or complaints, showing how stock expressions automate our thoughts: “It’s 12:20 and I need a drink;” “It’s 1:00 pm and I have not moved from my bed;” “It’s 11:00 pm and I’ve finally got a decent cup of coffee.” Twide and Twejudice replaces most of the words in the dialogue of Austen’s Pride and Prejudice with words used in similar contexts on Twitter, resulting in frivolous dialogue: (Mr Bennet asking Mrs Bennet about Mr Bingley:) “Is he/she overrun 0r single?” (Mrs Bennet exclaiming about Mr Bingley’s arrival:) “What _a fineee thingi 4my rageaholics girls!” While these works lack the sophistication of The Seeker, by polluting Austen with Twitter diction they illustrate how contemporary media have modified communication norms.

While open to anyone and, as in NaNoWriMo, governed by the single constraint that submissions contain at least 50,000 words, NaNoGenMo is gradually defining itself as a cohesive artistic movement that uses algorithms to experiment with literary form. The group’s identity is partly generated by ressentiment towards negative criticism that their “disjointed, robotic scripts” are “unlikely to trouble Booker judges.” Last year, one participant mocked how “futile it is to try to explain what we’re actually doing here, to the normals.” More positively, they are shaping identity through shared formal and critical resources. John Ohno (alias enkiv2) posted code to generate sestinas, haikus, and synonyms. Allison Parrish (alias aparrish) shared an interface to the Carnegie Mellon Pronouncing Dictionary that enables users to do things like scrape the dictionary for rhymes for a given word. Finally, Isaac Karth (alias ikarth) explained to members how the group’s tendency to assemble new poetry from prior texts has intellectual roots in Dadaism, Burroughs’s cut-up techniques, and the constraint-oriented works of Oulipo. When I spoke with Kazemi about the project, he said that Ken Goldsmith’s Uncreative Writing had inspired his thinking on how NaNoGenMo can challenge customary notions of authorship and creativity.

Evaluating these works by their capacity to read like human prose is a stale exercise because what qualifies as “natural” language is relative, not absolute. Our own linguistic habits are developed through interaction with others, be they members of a given social class, colleagues at work or school, or spambots littering our Twitter feeds. In a recent Medium post, Katie Rose Pipkin eloquently described how machines have already modified what we think of as natural language, whether we're cognizant of it or not. We speak differently to search tools and virtual assistants because we have come to develop a tacit understanding of how they work and can modify our requests to communicate effectively.

Story 31

NLG algorithms are generally considered to be a form of “artificial” or “machine intelligence” because they do things—like write news articles about sports or the weather, or write real estate ads, as the prototype my Fast Forward Labs colleagues built—that we believe humans alone can do. (I hope to explore the implications of the historical, relativist concept of artificial intelligence, espoused by people like Nancy Fulda, in a separate post.) As illustrated in the WSJ article, most people then evaluate NLG performance the way André Bazin evaluates style in realism: as the art of realism lies in the seeming absence of artifice, so too does the art of algorithms lie in the seeming absence of automation. Commercialization only enhances this push towards verisimilitude, as investment banks and news agencies like Forbes won’t pay top dollar for software that generates strange prose. In turn, we come to judge machine intelligence by its humanness, orienting development efforts towards writing prose that we would have written ourselves.

Earlier this year, the Wall Street Journal (WSJ) published an article with an interactive sidebar featuring excerpts from financial investment research reports. Readers were prompted to identify whether the excerpts were written by robots or humans. Admittedly, Wall Street’s preference for terse prose over poetic flourish makes such a challenge sensible. “Q2 cash balance expectation of $830m implies ~$80m of cash burn in Q2 after a $140m reduction in cash balance in Q1,” a sampled sentence, is effectively just three data points fused together with syntax. And that’s no coincidence. White-collar workers like journalists need not fear for their job security (at least not yet…) because new natural language generation (NLG) algorithms are very good at representing structured data sets in prose, but not yet very good at much else. That capability in itself is very powerful, as our ability to draw insights from data often depends on how they are presented (e.g., a chart reveals insights one would have missed in rows and columns). But it is a far cry from the creative courage required to build a world on a blank page.

What if machines generated text with different stylistic goals? Or rather, what if we evaluated machine intelligence not by its humanness but by its alienness, by its ability to generate something beyond what we could have created—or would have thought to create—without the assistance of an algorithm? What if automated prose could rupture our automatized perceptions, as Shklovsky described poetry in Art as Device, and offer a new vehicle for our own creativity?

It is this search to use automation as a vehicle for defamiliarization that makes NaNoGenMo so exciting. Darius Kazemi, an internet artist who runs an annual Bot Summit, created NaNoGenMo “on a whim” in November 2013. Thoughtful about literary form, Kazemi was amused by the fact that National Novel Writing Month (NaNoWriMo) set only two criteria for participants: submissions must be written in 30 days (the month of November) and must comprise at least 50,000 words. The absence of formal constraints invited experimentation: why write a novel when you can write an algorithm that writes a novel? He tweeted his idea, and a new community formed on GitHub, a web-based collaboration platform for software development.

At least two 2014 submissions use dreams as a locus to explore the odd beauty of machine intelligence. Thricedotted’s The Seeker relates the autobiography of a machine trying to “learn about human behavior by reading WikiHow.” The work is visually beautiful, with each iteration of the algorithm’s operations punctuated by pages that raindrop abstractions and house aphorisms like “imagine not one thing could be undirected.” Like the hopscotch overtones in Cortázar’s Rayuela, the aphorisms encourage the reader to perceive meaningful patterns in what might otherwise be random data (Thricedotted’s internet identity often mentions apophenia). Time and again, the algorithm repeats a “work, scan, imagine” loop, scraping WikiHow, searching plain text memories for a concept encountered during “work,” and building a dream sequence—or “univision”—from concepts it doesn’t recognize. These univisions contain the most surprising poetry in the work, where beauty arises from the reader’s ineluctable tendency to feel meaning in fragments.

Technical constraints explain why NaNoGenMo has come to align itself with poetics of recontextualization and reassembly. Indeed, genuine NLG algorithms, that is, those that can build words and syntax from the building blocks of letters and get smarter over time, are still nascent. Most of the 2014 submissions instead use rules to transform existing texts in creative ways, which also leads to topical similarities.

Evaluating these works by their capacity to read like human prose is a stale exercise because what qualifies as “natural” language is relative, not absolute. Our own linguistic habits are developed through interaction with others, be they members of a given social class, colleagues at work or school, or spambots littering our Twitter feeds. In a recent Medium post, Katie Rose Pipkin eloquently described how machines have already modified what we think of as natural language, whether we're cognizant of it or not. We speak differently to search tools and virtual assistants because we have come to develop a tacit understanding of how they work and can modify our requests to communicate effectively.

Moniker, a design studio based in Amsterdam, wrote a simple query that scans Twitter for sentences in the form “it’s + hour + : + minute + am/pm + and +” to compose a real-time global diary of daily activities. The “it’s [hour] and I am” formula tends to elicit predictable confessions or complaints, showing how stock expressions automate our thoughts: “It’s 12:20 and I need a drink;” “It’s 1:00 pm and I have not moved from my bed;” “It’s 11:00 pm and I’ve finally got a decent cup of coffee.” Twide and Twejudice replaces most of the words in the dialogue of Austen’s Pride and Prejudice with words used in similar contexts on Twitter, resulting in frivolous dialogue: (Mr Bennet asking Mrs Bennet about Mr Bingley:) “Is he/she overrun 0r single?” (Mrs Bennet exclaiming about Mr Bingley’s arrival:) “What _a fineee thingi 4my rageaholics girls!” While these works lack the sophistication of The Seeker, by polluting Austen with Twitter diction they illustrate how contemporary media have modified communication norms.

While open to anyone and, as in NaNoWriMo, governed by the single constraint that submissions contain at least 50,000 words, NaNoGenMo is gradually defining itself as a cohesive artistic movement that uses algorithms to experiment with literary form. The group’s identity is partly generated by ressentiment towards negative criticism that their “disjointed, robotic scripts” are “unlikely to trouble Booker judges.” Last year, one participant mocked how “futile it is to try to explain what we’re actually doing here, to the normals.” More positively, they are shaping identity through shared formal and critical resources. John Ohno (alias enkiv2) posted code to generate sestinas, haikus, and synonyms. Allison Parrish (alias aparrish) shared an interface to the Carnegie Mellon Pronouncing Dictionary that enables users to do things like scrape the dictionary for rhymes for a given word. Finally, Isaac Karth (alias ikarth) explained to members how the group’s tendency to assemble new poetry from prior texts has intellectual roots in Dadaism, Burroughs’s cut-up techniques, and the constraint-oriented works of Oulipo. When I spoke with Kazemi about the project, he said that Ken Goldsmith’s Uncreative Writing had inspired his thinking on how NaNoGenMo can challenge customary notions of authorship and creativity.

Allison Parrish’s I Waded in Clear Water uses sentiment analysis algorithms, which rank sentences based upon features that indicate emotional texture, to transform Gustavus Hindman Miller’s Ten Thousand Dreams Interpreted. Parrish mobilizes the formulaic “action” and “denotation” structure of Miller’s text (action = “To see an oak full of acorns”; denotation = “denotes increase and promotion”). She first transforms the actions into first-person, simple-past sentences (“I saw an oak full of acorns”) and then reorders the sentences from the worst to the best thing that can happen in dreams, according to a score given by a sentiment analysis algorithm run on the denotation. The sentiment scores create short chapters (“I drove into muddy water. I saw others weeding.”) and longer chapters with paratactic strings of disjointed actions: “…I descended a ladder. I saw any one lame. I saw my lover taking laudanum through disappointment. I heard mocking laughter. I kept a ledge. I had lice on my body. I saw. I lost it. I felt melancholy over any event. I saw others melancholy. I sent a message….” According to the sentiment algorithm, wading in clear water is our best dream.

The latest developments in machine learning are enabling machines to develop models of us in turn, ever updating what information they present and how they present it to match the input we provide. Kazemi is addressing this new give-and-take between man and machine head-on in his 2015 NaNoGenMo submission, “co-authoring” a novel with an algorithm: for every ten sentences the algorithm drafts, he commits only the one he, as the human, likes best. “Who wrote the book?” he asks. “[The algorithm] wrote literally every word, but [I] dictated nearly the entire form of the novel.” This is the same kind of dynamic that new research tools built on IBM Watson are presenting to lawyers and doctors: ROSS, a legal tool built on the Watson API, presents answers to research questions, and all the lawyer has to do is commit the answer she likes best. If NaNoGenMo helps us think more deeply about that dynamic, it can offer important insights into the future of AI.

Story 32

Allison Parrish’s I Waded in Clear Water uses sentiment analysis algorithms, which rank sentences based upon features that indicate emotional texture, to transform Gustavus Hindman Miller’s Ten Thousand Dreams Interpreted. Parrish mobilizes the formulaic “action” and “denotation” structure of Miller’s text (action = “To see an oak full of acorns”; denotation = “denotes increase and promotion”). She first transforms the actions into first-person, simple-past sentences (“I saw an oak full of acorns”) and then reorders the sentences from the worst to the best thing that can happen in dreams, according to a score given by a sentiment analysis algorithm run on the denotation. The sentiment scores create short chapters (“I drove into muddy water. I saw others weeding.”) and longer chapters with paratactic strings of disjointed actions: “…I descended a ladder. I saw any one lame. I saw my lover taking laudanum through disappointment. I heard mocking laughter. I kept a ledge. I had lice on my body. I saw. I lost it. I felt melancholy over any event. I saw others melancholy. I sent a message….” According to the sentiment algorithm, wading in clear water is our best dream.

At least two 2014 submissions use dreams as a locus to explore the odd beauty of machine intelligence. Thricedotted’s The Seeker relates the autobiography of a machine trying to “learn about human behavior by reading WikiHow.” The work is visually beautiful, with each iteration of the algorithm’s operations punctuated by pages that raindrop abstractions and house aphorisms like “imagine not one thing could be undirected.” Like the hopscotch overtones in Cortázar’s Rayuela, the aphorisms encourage the reader to perceive meaningful patterns in what might otherwise be random data (Thricedotted’s internet identity often mentions apophenia). Time and again, the algorithm repeats a “work, scan, imagine” loop, scraping WikiHow, searching plain text memories for a concept encountered during “work,” and building a dream sequence—or “univision”—from concepts it doesn’t recognize. These univisions contain the most surprising poetry in the work, where beauty arises from the reader’s ineluctable tendency to feel meaning in fragments.

The latest developments in machine learning are enabling machines to develop models of us in turn, ever updating what information they present and how they present it to match the input we provide. Kazemi is addressing this new give-and-take between man and machine head-on in his 2015 NaNoGenMo submission, “co-authoring” a novel with an algorithm: for every ten sentences the algorithm drafts, he commits only the one he, as the human, likes best. “Who wrote the book?” he asks. “[The algorithm] wrote literally every word, but [I] dictated nearly the entire form of the novel.” This is the same kind of dynamic that new research tools built on IBM Watson are presenting to lawyers and doctors: ROSS, a legal tool built on the Watson API, presents answers to research questions, and all the lawyer has to do is commit the answer she likes best. If NaNoGenMo helps us think more deeply about that dynamic, it can offer important insights into the future of AI.

Earlier this year, the Wall Street Journal (WSJ) published an article with an interactive sidebar featuring excerpts from financial investment research reports. Readers were prompted to identify whether the excerpts were written by robots or humans. Admittedly, Wall Street’s preference for terse prose over poetic flourish makes such a challenge sensible. “Q2 cash balance expectation of $830m implies ~$80m of cash burn in Q2 after a $140m reduction in cash balance in Q1,” a sampled sentence, is effectively just three data points fused together with syntax. And that’s no coincidence. White-collar workers like journalists need not fear for their job security (at least not yet…) because new natural language generation (NLG) algorithms are very good at representing structured data sets in prose, but not yet very good at much else. That capability in itself is very powerful, as our ability to draw insights from data often depends on how they are presented (e.g., a chart reveals insights one would have missed in rows and columns). But it is a far cry from the creative courage required to build a world on a blank page.

Moniker, a design studio based in Amsterdam, wrote a simple query that scans Twitter for sentences in the form “it’s + hour + : + minute + am/pm + and +” to compose a real-time global diary of daily activities. The “it’s [hour] and I am” formula tends to elicit predictable confessions or complaints, showing how stock expressions automate our thoughts: “It’s 12:20 and I need a drink;” “It’s 1:00 pm and I have not moved from my bed;” “It’s 11:00 pm and I’ve finally got a decent cup of coffee.” Twide and Twejudice replaces most of the words in the dialogue of Austen’s Pride and Prejudice with words used in similar contexts on Twitter, resulting in frivolous dialogue: (Mr Bennet asking Mrs Bennet about Mr Bingley:) “Is he/she overrun 0r single?” (Mrs Bennet exclaiming about Mr Bingley’s arrival:) “What _a fineee thingi 4my rageaholics girls!” While these works lack the sophistication of The Seeker, by polluting Austen with Twitter diction they illustrate how contemporary media have modified communication norms.

What if machines generated text with different stylistic goals? Or rather, what if we evaluated machine intelligence not by its humanness but by its alienness, by its ability to generate something beyond what we could have created—or would have thought to create—without the assistance of an algorithm? What if automated prose could rupture our automatized perceptions, as Shklovsky described poetry in Art as Device, and offer a new vehicle for our own creativity?

Technical constraints explain why NaNoGenMo has come to align itself with poetics of recontextualization and reassembly. Indeed, genuine NLG algorithms, that is, those that can build words and syntax from the building blocks of letters and get smarter over time, are still nascent. Most of the 2014 submissions instead use rules to transform existing texts in creative ways, which also leads to topical similarities.

While open to anyone and, as in NaNoWriMo, governed by the single constraint that submissions contain at least 50,000 words, NaNoGenMo is gradually defining itself as a cohesive artistic movement that uses algorithms to experiment with literary form. The group’s identity is partly generated by ressentiment towards negative criticism that their “disjointed, robotic scripts” are “unlikely to trouble Booker judges.” Last year, one participant mocked how “futile it is to try to explain what we’re actually doing here, to the normals.” More positively, they are shaping identity through shared formal and critical resources. John Ohno (alias enkiv2) posted code to generate sestinas, haikus, and synonyms. Allison Parrish (alias aparrish) shared an interface to the Carnegie Mellon Pronouncing Dictionary that enables users to do things like scrape the dictionary for rhymes for a given word. Finally, Isaac Karth (alias ikarth) explained to members how the group’s tendency to assemble new poetry from prior texts has intellectual roots in Dadaism, Burroughs’s cut-up techniques, and the constraint-oriented works of Oulipo. When I spoke with Kazemi about the project, he said that Ken Goldsmith’s Uncreative Writing had inspired his thinking on how NaNoGenMo can challenge customary notions of authorship and creativity.

It is this search to use automation as a vehicle for defamiliarization that makes NaNoGenMo so exciting. Darius Kazemi, an internet artist who runs an annual Bot Summit, created NaNoGenMo “on a whim” in November 2013. Thoughtful about literary form, Kazemi was amused by the fact that National Novel Writing Month (NaNoWriMo) set only two criteria for participants: submissions must be written in 30 days (the month of November) and must comprise at least 50,000 words. The absence of formal constraints invited experimentation: why write a novel when you can write an algorithm that writes a novel? He tweeted his idea, and a new community formed on GitHub, a web-based collaboration platform for software development.

NLG algorithms are generally considered to be a form of “artificial” or “machine intelligence” because they do things—like write news articles about sports or the weather, or write real estate ads, as the prototype my Fast Forward Labs colleagues built—that we believe humans alone can do. (I hope to explore the implications of the historical, relativist concept of artificial intelligence, espoused by people like Nancy Fulda, in a separate post.) As illustrated in the WSJ article, most people then evaluate NLG performance the way André Bazin evaluates style in realism: as the art of realism lies in the seeming absence of artifice, so too does the art of algorithms lie in the seeming absence of automation. Commercialization only enhances this push towards verisimilitude, as investment banks and news agencies like Forbes won’t pay top dollar for software that generates strange prose. In turn, we come to judge machine intelligence by its humanness, orienting development efforts towards writing prose that we would have written ourselves.

Evaluating these works by their capacity to read like human prose is a stale exercise because what qualifies as “natural” language is relative, not absolute. Our own linguistic habits are developed through interaction with others, be they members of a given social class, colleagues at work or school, or spambots littering our Twitter feeds. In a recent Medium post, Katie Rose Pipkin eloquently described how machines have already modified what we think of as natural language, whether we're cognizant of it or not. We speak differently to search tools and virtual assistants because we have come to develop a tacit understanding of how they work and can modify our requests to communicate effectively.

Story 33

Evaluating these works by their capacity to read like human prose is a stale exercise because what qualifies as “natural” language is relative, not absolute. Our own linguistic habits are developed through interaction with others, be they members of a given social class, colleagues at work or school, or spambots littering our Twitter feeds. In a recent Medium post, Katie Rose Pipkin eloquently described how machines have already modified what we think of as natural language, whether we're cognizant of it or not. We speak differently to search tools and virtual assistants because we have come to develop a tacit understanding of how they work and can modify our requests to communicate effectively.

What if machines generated text with different stylistic goals? Or rather, what if we evaluated machine intelligence not by its humanness but by its alienness, by its ability to generate something beyond what we could have created—or would have thought to create—without the assistance of an algorithm? What if automated prose could rupture our automatized perceptions, as Shklovsky described poetry in Art as Device, and offer a new vehicle for our own creativity?

Moniker, a design studio based in Amsterdam, wrote a simple query that scans Twitter for sentences in the form “it’s + hour + : + minute + am/pm + and +” to compose a real-time global diary of daily activities. The “it’s [hour] and I am” formula tends to elicit predictable confessions or complaints, showing how stock expressions automate our thoughts: “It’s 12:20 and I need a drink;” “It’s 1:00 pm and I have not moved from my bed;” “It’s 11:00 pm and I’ve finally got a decent cup of coffee.” Twide and Twejudice replaces most of the words in the dialogue of Austen’s Pride and Prejudice with words used in similar contexts on Twitter, resulting in frivolous dialogue: (Mr Bennet asking Mrs Bennet about Mr Bingley:) “Is he/she overrun 0r single?” (Mrs Bennet exclaiming about Mr Bingley’s arrival:) “What _a fineee thingi 4my rageaholics girls!” While these works lack the sophistication of The Seeker, by polluting Austen with Twitter diction they illustrate how contemporary media have modified communication norms.

Technical constraints explain why NaNoGenMo has come to align itself with poetics of recontextualization and reassembly. Indeed, genuine NLG algorithms, that is, those that can build words and syntax from the building blocks of letters and get smarter over time, are still nascent. Most of the 2014 submissions instead use rules to transform existing texts in creative ways, which also leads to topical similarities.

The latest developments in machine learning are enabling machines to develop models of us in turn, ever updating what information they present and how they present it to match the input we provide. Kazemi is addressing this new give-and-take between man and machine head-on in his 2015 NaNoGenMo submission, “co-authoring” a novel with an algorithm: for every ten sentences the algorithm drafts, he commits only the one he, as the human, likes best. “Who wrote the book?” he asks. “[The algorithm] wrote literally every word, but [I] dictated nearly the entire form of the novel.” This is the same kind of dynamic that new research tools built on IBM Watson are presenting to lawyers and doctors: ROSS, a legal tool built on the Watson API, presents answers to research questions, and all the lawyer has to do is commit the answer she likes best. If NaNoGenMo helps us think more deeply about that dynamic, it can offer important insights into the future of AI.

At least two 2014 submissions use dreams as a locus to explore the odd beauty of machine intelligence. Thricedotted’s The Seeker relates the autobiography of a machine trying to “learn about human behavior by reading WikiHow.” The work is visually beautiful, with each iteration of the algorithm’s operations punctuated by pages that raindrop abstractions and house aphorisms like “imagine not one thing could be undirected.” Like the hopscotch overtones in Cortázar’s Rayuela, the aphorisms encourage the reader to perceive meaningful patterns in what might otherwise be random data (Thricedotted’s internet identity often mentions apophenia). Time and again, the algorithm repeats a “work, scan, imagine” loop, scraping WikiHow, searching plain text memories for a concept encountered during “work,” and building a dream sequence—or “univision”—from concepts it doesn’t recognize. These univisions contain the most surprising poetry in the work, where beauty arises from the reader’s ineluctable tendency to feel meaning in fragments.

Allison Parrish’s I Waded in Clear Water uses sentiment analysis algorithms, which rank sentences based upon features that indicate emotional texture, to transform Gustavus Hindman Miller’s Ten Thousand Dreams Interpreted. Parrish mobilizes the formulaic “action” and “denotation” structure of Miller’s text (action = “To see an oak full of acorns”; denotation = “denotes increase and promotion”). She first transforms the actions into first-person, simple-past sentences (“I saw an oak full of acorns”) and then reorders the sentences from the worst to the best thing that can happen in dreams, according to a score given by a sentiment analysis algorithm run on the denotation. The sentiment scores create short chapters (“I drove into muddy water. I saw others weeding.”) and longer chapters with paratactic strings of disjointed actions: “…I descended a ladder. I saw any one lame. I saw my lover taking laudanum through disappointment. I heard mocking laughter. I kept a ledge. I had lice on my body. I saw. I lost it. I felt melancholy over any event. I saw others melancholy. I sent a message….” According to the sentiment algorithm, wading in clear water is our best dream.

Earlier this year, the Wall Street Journal (WSJ) published an article with an interactive sidebar featuring excerpts from financial investment research reports. Readers were prompted to identify whether the excerpts were written by robots or humans. Admittedly, Wall Street’s preference for terse prose over poetic flourish makes such a challenge sensible. “Q2 cash balance expectation of $830m implies ~$80m of cash burn in Q2 after a $140m reduction in cash balance in Q1,” a sampled sentence, is effectively just three data points fused together with syntax. And that’s no coincidence. White-collar workers like journalists need not fear for their job security (at least not yet…) because new natural language generation (NLG) algorithms are very good at representing structured data sets in prose, but not yet very good at much else. That capability in itself is very powerful, as our ability to draw insights from data often depends on how they are presented (e.g., a chart reveals insights one would have missed in rows and columns). But it is a far cry from the creative courage required to build a world on a blank page.

While open to anyone and, as in NaNoWriMo, governed by the single constraint that submissions contain at least 50,000 words, NaNoGenMo is gradually defining itself as a cohesive artistic movement that uses algorithms to experiment with literary form. The group’s identity is partly generated by ressentiment towards negative criticism that their “disjointed, robotic scripts” are “unlikely to trouble Booker judges.” Last year, one participant mocked how “futile it is to try to explain what we’re actually doing here, to the normals.” More positively, they are shaping identity through shared formal and critical resources. John Ohno (alias enkiv2) posted code to generate sestinas, haikus, and synonyms. Allison Parrish (alias aparrish) shared an interface to the Carnegie Mellon Pronouncing Dictionary that enables users to do things like scrape the dictionary for rhymes for a given word. Finally, Isaac Karth (alias ikarth) explained to members how the group’s tendency to assemble new poetry from prior texts has intellectual roots in Dadaism, Burroughs’s cut-up techniques, and the constraint-oriented works of Oulipo. When I spoke with Kazemi about the project, he said that Ken Goldsmith’s Uncreative Writing had inspired his thinking on how NaNoGenMo can challenge customary notions of authorship and creativity.

It is this search to use automation as a vehicle for defamiliarization that makes NaNoGenMo so exciting. Darius Kazemi, an internet artist who runs an annual Bot Summit, created NaNoGenMo “on a whim” in November 2013. Thoughtful about literary form, Kazemi was amused by the fact that National Novel Writing Month (NaNoWriMo) set only two criteria for participants: submissions must be written in 30 days (the month of November) and must comprise at least 50,000 words. The absence of formal constraints invited experimentation: why write a novel when you can write an algorithm that writes a novel? He tweeted his idea, and a new community formed on GitHub, a web-based collaboration platform for software development.

NLG algorithms are generally considered to be a form of “artificial” or “machine intelligence” because they do things—like write news articles about sports or the weather, or write real estate ads, as the prototype my Fast Forward Labs colleagues built—that we believe humans alone can do. (I hope to explore the implications of the historical, relativist concept of artificial intelligence, espoused by people like Nancy Fulda, in a separate post.) As illustrated in the WSJ article, most people then evaluate NLG performance the way André Bazin evaluates style in realism: as the art of realism lies in the seeming absence of artifice, so too does the art of algorithms lie in the seeming absence of automation. Commercialization only enhances this push towards verisimilitude, as investment banks and news agencies like Forbes won’t pay top dollar for software that generates strange prose. In turn, we come to judge machine intelligence by its humanness, orienting development efforts towards writing prose that we would have written ourselves.

Story 34

NLG algorithms are generally considered to be a form of “artificial” or “machine intelligence” because they do things—like write news articles about sports or the weather, or write real estate ads, as the prototype my Fast Forward Labs colleagues built—that we believe humans alone can do. (I hope to explore the implications of the historical, relativist concept of artificial intelligence, espoused by people like Nancy Fulda, in a separate post.) As illustrated in the WSJ article, most people then evaluate NLG performance the way André Bazin evaluates style in realism: as the art of realism lies in the seeming absence of artifice, so too does the art of algorithms lie in the seeming absence of automation. Commercialization only enhances this push towards verisimilitude, as investment banks and news agencies like Forbes won’t pay top dollar for software that generates strange prose. In turn, we come to judge machine intelligence by its humanness, orienting development efforts towards writing prose that we would have written ourselves.

What if machines generated text with different stylistic goals? Or rather, what if we evaluated machine intelligence not by its humanness but by its alienness, by its ability to generate something beyond what we could have created—or would have thought to create—without the assistance of an algorithm? What if automated prose could rupture our automatized perceptions, as Shklovsky described poetry in Art as Device, and offer a new vehicle for our own creativity?

Evaluating these works by their capacity to read like human prose is a stale exercise because what qualifies as “natural” language is relative, not absolute. Our own linguistic habits are developed through interaction with others, be they members of a given social class, colleagues at work or school, or spambots littering our Twitter feeds. In a recent Medium post, Katie Rose Pipkin eloquently described how machines have already modified what we think of as natural language, whether we're cognizant of it or not. We speak differently to search tools and virtual assistants because we have come to develop a tacit understanding of how they work and can modify our requests to communicate effectively.

Technical constraints explain why NaNoGenMo has come to align itself with poetics of recontextualization and reassembly. Indeed, genuine NLG algorithms, that is, those that can build words and syntax from the building blocks of letters and get smarter over time, are still nascent. Most of the 2014 submissions instead use rules to transform existing texts in creative ways, which also leads to topical similarities.

The latest developments in machine learning are enabling machines to develop models of us in turn, ever updating what information they present and how they present it to match the input we provide. Kazemi is addressing this new give-and-take between man and machine head-on in his 2015 NaNoGenMo submission, “co-authoring” a novel with an algorithm: for every ten sentences the algorithm drafts, he commits only the one he, as the human, likes best. “Who wrote the book?” he asks. “[The algorithm] wrote literally every word, but [I] dictated nearly the entire form of the novel.” This is the same kind of dynamic that new research tools built on IBM Watson are presenting to lawyers and doctors: ROSS, a legal tool built on the Watson API, presents answers to research questions, and all the lawyer has to do is commit the answer she likes best. If NaNoGenMo helps us think more deeply about that dynamic, it can offer important insights into the future of AI.

At least two 2014 submissions use dreams as a locus to explore the odd beauty of machine intelligence. Thricedotted’s The Seeker relates the autobiography of a machine trying to “learn about human behavior by reading WikiHow.” The work is visually beautiful, with each iteration of the algorithm’s operations punctuated by pages that raindrop abstractions and house aphorisms like “imagine not one thing could be undirected.” Like the hopscotch overtones in Cortázar’s Rayuela, the aphorisms encourage the reader to perceive meaningful patterns in what might otherwise be random data (Thricedotted’s internet identity often mentions apophenia). Time and again, the algorithm repeats a “work, scan, imagine” loop, scraping WikiHow, searching plain text memories for a concept encountered during “work,” and building a dream sequence—or “univision”—from concepts it doesn’t recognize. These univisions contain the most surprising poetry in the work, where beauty arises from the reader’s ineluctable tendency to feel meaning in fragments.
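
The “work, scan, imagine” loop can be sketched roughly as follows; this is a loose paraphrase of the description above, with every name (scrape_wikihow, compose_univision, known_concepts) invented for illustration rather than taken from Thricedotted’s code:

```python
# Loose sketch of The Seeker's "work, scan, imagine" loop as described
# above; all names and data here are hypothetical, not Thricedotted's code.

known_concepts = {"door", "water", "letter"}   # the machine's vocabulary so far

def scrape_wikihow():
    """'Work': return concepts encountered in a scraped article (stubbed)."""
    return ["door", "mirror", "undirected thing"]

def compose_univision(strange):
    """'Imagine': build a dream sequence from unrecognized concepts."""
    return "I dreamed of " + " and of ".join(strange) + "."

memories = []
for _ in range(3):                              # time and again...
    concepts = scrape_wikihow()                 # work
    strange = [c for c in concepts if c not in known_concepts]  # scan
    if strange:
        memories.append(compose_univision(strange))             # imagine
print(memories)
```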

Moniker, a design studio based in Amsterdam, wrote a simple query that scans Twitter for sentences in the form “it’s + hour + : + minute + am/pm + and +” to compose a real-time global diary of daily activities. The “it’s [time] and I…” formula tends to elicit predictable confessions or complaints, showing how stock expressions automate our thoughts: “It’s 12:20 and I need a drink;” “It’s 1:00 pm and I have not moved from my bed;” “It’s 11:00 pm and I’ve finally got a decent cup of coffee.” Twide and Twejudice replaces words in the dialogue of Austen’s original with words used in similar contexts on Twitter, resulting in frivolous dialogue: (Mr Bennet asking Mrs Bennet about Mr Bingley:) “Is he/she overrun 0r single?” (Mrs Bennet exclaiming about Mr Bingley’s arrival:) “What _a fineee thingi 4my rageaholics girls!” While these lack the sophistication of The Seeker, by polluting Austen with Twitter diction, they illustrate how contemporary media have modified communication norms.
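
A plausible Python version of that search pattern (Moniker’s actual query is not published here, so treat the regex as an approximation):

```python
import re

# Approximate version of the pattern described above:
# "it's + hour + : + minute + am/pm + and + ..."
diary_pattern = re.compile(
    r"it[’']s\s+(\d{1,2}):(\d{2})\s*(am|pm)?\s+and\s+(.*)",
    re.IGNORECASE,
)

tweet = "It's 1:00 pm and I have not moved from my bed"
match = diary_pattern.search(tweet)
if match:
    hour, minute, meridiem, activity = match.groups()
    print(hour, minute, meridiem, activity)  # -> 1 00 pm I have not moved from my bed
```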

Earlier this year, the Wall Street Journal (WSJ) published an article with an interactive sidebar featuring excerpts from financial investment research reports. Readers were prompted to identify whether the excerpts were written by robots or humans. Admittedly, Wall Street’s preference for terse prose over poetic flourish makes a challenge like this sensible. “Q2 cash balance expectation of $830m implies ~$80m of cash burn in Q2 after a $140m reduction in cash balance in Q1,” a sampled sentence, is effectively just three data points fused together with syntax. And that’s no coincidence. White-collar workers like journalists need not fear for their job security (at least not yet…) because new natural language generation (NLG) algorithms are very good at representing structured data sets in prose, but not yet very good at much else. That capability in itself is very powerful, as our ability to draw insights from data often depends on how they are presented (e.g., a chart reveals insights one would have missed in rows and columns). But it is a far cry from the creative courage required to build a world on a blank page.
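
That “three data points fused together with syntax” structure is easy to see if you rebuild the quoted sentence as a fill-in template; the function below is an illustration echoing the sample above, not anyone’s production NLG system:

```python
# Minimal data-to-prose sketch: the quoted sentence rebuilt as a template.
# cash_commentary() and its figures are illustrative only.

def cash_commentary(q2_balance_expectation, q2_burn, q1_reduction):
    return (
        f"Q2 cash balance expectation of ${q2_balance_expectation}m implies "
        f"~${q2_burn}m of cash burn in Q2 after a ${q1_reduction}m reduction "
        f"in cash balance in Q1"
    )

print(cash_commentary(830, 80, 140))
```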

It is this search to use automation as a vehicle for defamiliarization that makes NaNoGenMo so exciting. Darius Kazemi, an internet artist who runs an annual Bot Summit, created NaNoGenMo “on a whim” in November 2013. Thoughtful about literary form, Kazemi was amused by the fact that National Novel Writing Month (NaNoWriMo) set only two criteria for participants: submissions must be written in 30 days (the month of November) and must comprise at least 50,000 words. The absence of formal constraints invited experimentation: why write a novel when you can write an algorithm that writes a novel? He tweeted his idea, and a new community formed on GitHub, a web-based software development collaboration tool.

Allison Parrish’s I Waded in Clear Water uses sentiment analysis algorithms, which rank sentences based upon features that indicate emotional texture, to transform Gustavus Hindman Miller’s Ten Thousand Dreams Interpreted. Parrish mobilizes the formulaic “action” and “denotation” structure of Miller’s text (action = “To see an oak full of acorns”; denotation = “denotes increase and promotion”). She first transforms the actions into first-person, simple past sentences (“I saw an oak full of acorns”) and then reorders the sentences from the worst to the best thing that can happen in dreams, according to a score given by a sentiment analysis algorithm run on the denotation. The sentiment scores create short chapters (“I drove into muddy water. I saw others weeding.”) and longer chapters with paratactic strings of disjointed actions: “…I descended a ladder. I saw any one lame. I saw my lover taking laudanum through disappointment. I heard mocking laughter. I kept a ledge. I had lice on my body. I saw. I lost it. I felt melancholy over any event. I saw others melancholy. I sent a message….” According to the sentiment algorithm, wading in clear water is our best dream.
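
A compressed sketch of that pipeline, assuming TextBlob’s polarity score as a stand-in for whichever sentiment algorithm Parrish actually ran, and a naive verb table for the grammatical rewrite:

```python
from textblob import TextBlob  # stand-in scorer; Parrish's actual sentiment tool may differ

# Entries follow Miller's action/denotation format. Only the acorn pair is
# quoted above; the other two denotations are invented stand-ins.
entries = [
    ("To see an oak full of acorns", "denotes increase and promotion."),
    ("To wade in clear water", "denotes joys and favorable prospects."),
    ("To drive into muddy water", "denotes distressing failures and losses."),
]

def first_person_past(action):
    """Toy version of the grammatical rewrite: 'To see ...' -> 'I saw ...'."""
    verb_map = {"see": "saw", "wade": "waded", "drive": "drove"}
    words = action.split()[1:]                 # drop the leading "To"
    words[0] = verb_map.get(words[0], words[0])
    return "I " + " ".join(words) + "."

# Reorder from worst to best dream by the denotation's sentiment score.
for action, denotation in sorted(entries, key=lambda e: TextBlob(e[1]).sentiment.polarity):
    print(first_person_past(action))
```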

While open to anyone and, as in NaNoWriMo, governed by the single constraint that submissions contain at least 50,000 words, NaNoGenMo is gradually defining itself as a cohesive artistic movement that uses algorithms to experiment with literary form. The group’s identity is partly generated by ressentiment towards negative criticism that their “disjointed, robotic scripts” are “unlikely to trouble Booker judges.” Last year, one participant mocked how “futile it is to try to explain what we’re actually doing here, to the normals.” More positively, they are shaping identity through shared formal and critical resources. John Ohno (alias enkiv2) posted code to generate sestinas, haikus, and synonyms. Allison Parrish (alias aparrish) shared an interface to the Carnegie Mellon Pronouncing Dictionary that enables users to do things like scrape the dictionary for rhymes for a given word. Finally, Isaac Karth (alias ikarth) explained to members how the group’s tendency to assemble new poetry from prior texts has intellectual roots in Dadaism, Burroughs’s cut-up techniques, and the constraint-oriented works of Oulipo. When I spoke with Kazemi about the project, he said that Ken Goldsmith’s Uncreative Writing had inspired his thinking on how NaNoGenMo can challenge customary notions of authorship and creativity.
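
Parrish has since packaged that kind of lookup as the pronouncing Python library, which gives a feel for the shared resource (the library post-dates the interface mentioned above, so treat this as an analogue rather than the original tool):

```python
# Rhyme lookup against the CMU Pronouncing Dictionary via Allison Parrish's
# `pronouncing` library (pip install pronouncing).

import pronouncing

print(pronouncing.rhymes("water")[:10])       # some dictionary rhymes for "water"
print(pronouncing.phones_for_word("acorn"))   # ARPAbet pronunciation(s) of "acorn"
```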

Story 35

Technical constraints explain why NaNoGenMo has come to align itself with poetics of recontextualization and reassembly. Indeed, genuine NLG algorithms, that is, those that can build words and syntax from the building blocks of letters and get smarter over time, are still nascent. Most of the 2014 submissions instead use rules to transform existing texts in creative ways, which also leads to topical similarities.

Evaluating these works by their capacity to read like human prose is a stale exercise because what qualifies as “natural” language is relative, not absolute. Our own linguistic habits are developed through interaction with others, be they members of a given social class, colleagues at work or school, or spambots littering our Twitter feeds. In a recent Medium post, Katie Rose Pipkin eloquently described how machines have already modified what we think of as natural language, whether we’re cognizant of it or not. We speak differently to search tools and virtual assistants because we have come to develop a tacit understanding of how they work and can modify our requests to communicate effectively.

NLG algorithms are generally considered to be a form of “artificial” or “machine intelligence” because they do things—like write news articles about sports or the weather, or write real estate ads, as the prototype my Fast Forward Labs colleagues built—we believe humans alone can do. (I hope to explore the implications of the historical, relativist concept of artificial intelligence, espoused by people like Nancy Fulda, in a separate post.) As illustrated in the WSJ article, most people then evaluate NLG performance like André Bazin evaluates style in realism: as the art of realism lies in the seeming absence of artifice, so too does the art of algorithms lie in the seeming absence of automation. Commercialization only enhances this push towards verisimilitude, as investment banks and news agencies like Forbes won’t pay top dollar for software that generates strange prose. In turn, we come to judge machine intelligence by its humanness, orienting development efforts towards writing prose that we would have written ourselves.

What if machines generated text with different stylistic goals? Or rather, what if we evaluated machine intelligence not by its humanness but by its alienness, by its ability to generate something beyond what we could have created—or would have thought to create—without the assistance of an algorithm? What if automated prose could rupture our automatized perceptions, as Shklovsky described poetry in Art as Device, and offer a new vehicle for our own creativity?

The latest developments in machine learning are enabling machines to develop models of us in turn, ever updating what information they present and how they present it to match the input we provide. Kazemi is addressing this new give-and-take between man and machine head-on in his 2015 NaNoGenMo submission, “co-authoring” a novel with an algorithm: for every ten sentences the algorithm drafts, he commits only the one he, as human, likes best. “Who wrote the book?” he asks. “[The algorithm] wrote literally every word, but [I] dictated nearly the entire form of the novel.” This is the same kind of dynamic that new research tools built on IBM Watson are presenting to lawyers and doctors: ROSS, a legal tool built on the Watson API, presents answers to research questions, and all the lawyer has to do is commit the answer she likes best. If NaNoGenMo helps us think more deeply about that dynamic, it can offer important insights into the future of AI.

While open to anyone and, as in NaNoWriMo, governed by the single constraint that submissions contain at least 50,000 words, NaNoGenMo is gradually defining itself as a cohesive artistic movement that uses algorithms to experiment with literary form. The group’s identity is partly generated by ressentiment towards negative criticism that their “disjointed, robotic scripts” are “unlikely to trouble Booker judges.” Last year, one participant mocked how “futile it is to try to explain what we’re actually doing here, to the normals.” More positively, they are shaping identity through shared formal and critical resources. John Ohno (alias enkiv2) posted code to generate sestinas, haikus, and synonyms. Allison Parrish (alias aparrish) shared an interface to the Carnegie Mellon Pronouncing Dictionary that enables users to do things like scrape the dictionary for rhymes for a given word. Finally, Isaac Karth (alias ikarth) explained to members how the group’s tendency to assemble new poetry from prior texts has intellectual roots in Dadaism, Burroughs’s cut-up techniques, and the constraint-oriented works of Oulipo. When I spoke with Kazemi about the project, he said that Ken Goldsmith’s Uncreative Writing had inspired his thinking on how NaNoGenMo can challenge customary notions of authorship and creativity.

At least two 2014 submissions use dreams as a locus to explore the odd beauty of machine intelligence. Thricedotted’s The Seeker relates the autobiography of a machine trying to “learn about human behavior by reading WikiHow.” The work is visually beautiful, with each iteration of the algorithm’s operations punctuated by pages that raindrop abstractions and house aphorisms like “imagine not one thing could be undirected.” Like the hopscotch overtones in Cortázar’s Rayuela, the aphorisms encourage the reader to perceive meaningful patterns in what might otherwise be random data (Thricedotted’s internet identity often mentions apophenia). Time and again, the algorithm repeats a “work, scan, imagine” loop, scraping WikiHow, searching plain text memories for a concept encountered during “work,” and building a dream sequence—or “univision”—from concepts it doesn’t recognize. These univisions contain the most surprising poetry in the work, where beauty arises from the reader’s ineluctable tendency to feel meaning in fragments.

It is this search to use automation as a vehicle for defamiliarization that makes NaNoGenMo so exciting. Darius Kazemi, an internet artist who runs an annual Bot Summit, created NaNoGenMo “on a whim” in November 2013. Thoughtful about literary form, Kazemi was amused by the fact that National Novel Writing Month (NaNoWriMo) set only two criteria for participants: submissions must be written in 30 days (the month of November) and must comprise at least 50,000 words. The absence of formal constraints invited experimentation: why write a novel when you can write an algorithm that writes a novel? He tweeted his idea, and a new community formed on GitHub, a web-based software development collaboration tool.

Allison Parrish’s I Waded in Clear Water uses sentiment analysis algorithms, which rank sentences based upon features that indicate emotional texture, to transform Gustavus Hindman Miller’s Ten Thousand Dreams Interpreted. Parrish mobilizes the formulaic “action” and “denotation” structure of Miller’s text (action = “To see an oak full of acorns”; denotation = “denotes increase and promotion”). She first transforms the actions into first-person, simple past sentences (“I saw an oak full of acorns”) and then reorders the sentences from the worst to the best thing that can happen in dreams, according to a score given by a sentiment analysis algorithm run on the denotation. The sentiment scores create short chapters (“I drove into muddy water. I saw others weeding.”) and longer chapters with paratactic strings of disjointed actions: “…I descended a ladder. I saw any one lame. I saw my lover taking laudanum through disappointment. I heard mocking laughter. I kept a ledge. I had lice on my body. I saw. I lost it. I felt melancholy over any event. I saw others melancholy. I sent a message….” According to the sentiment algorithm, wading in clear water is our best dream.

Earlier this year, the Wall Street Journal (WSJ) published an article with an interactive sidebar featuring excerpts from financial investment research reports. Readers were prompted to identify whether the excerpts were written by robots or humans. Admittedly, Wall Street’s preference for terse prose over poetic flourish makes a challenge like this sensible. “Q2 cash balance expectation of $830m implies ~$80m of cash burn in Q2 after a $140m reduction in cash balance in Q1,” a sampled sentence, is effectively just three data points fused together with syntax. And that’s no coincidence. White-collar workers like journalists need not fear for their job security (at least not yet…) because new natural language generation (NLG) algorithms are very good at representing structured data sets in prose, but not yet very good at much else. That capability in itself is very powerful, as our ability to draw insights from data often depends on how they are presented (e.g., a chart reveals insights one would have missed in rows and columns). But it is a far cry from the creative courage required to build a world on a blank page.

Moniker, a design studio based in Amsterdam, wrote a simple query that scans Twitter for sentences in the form “it’s + hour + : + minute + am/pm + and +” to compose a real-time global diary of daily activities. The “it’s [time] and I…” formula tends to elicit predictable confessions or complaints, showing how stock expressions automate our thoughts: “It’s 12:20 and I need a drink;” “It’s 1:00 pm and I have not moved from my bed;” “It’s 11:00 pm and I’ve finally got a decent cup of coffee.” Twide and Twejudice replaces words in the dialogue of Austen’s original with words used in similar contexts on Twitter, resulting in frivolous dialogue: (Mr Bennet asking Mrs Bennet about Mr Bingley:) “Is he/she overrun 0r single?” (Mrs Bennet exclaiming about Mr Bingley’s arrival:) “What _a fineee thingi 4my rageaholics girls!” While these lack the sophistication of The Seeker, by polluting Austen with Twitter diction, they illustrate how contemporary media have modified communication norms.

Story 36

The latest developments in machine learning are enabling machines to develop models of us in turn, ever updating what information they present and how they present it to match the input we provide. Kazemi is addressing this new give-and-take between man and machine head-on in his 2015 NaNoGenMo submission, “co-authoring” a novel with an algorithm: for every ten sentences the algorithm drafts, he commits only the one he, as human, likes best. “Who wrote the book?” he asks. “[The algorithm] wrote literally every word, but [I] dictated nearly the entire form of the novel.” This is the same kind of dynamic that new research tools built on IBM Watson are presenting to lawyers and doctors: ROSS, a legal tool built on the Watson API, presents answers to research questions, and all the lawyer has to do is commit the answer she likes best. If NaNoGenMo helps us think more deeply about that dynamic, it can offer important insights into the future of AI.

Technical constraints explain why NaNoGenMo has come to align itself with poetics of recontextualization and reassembly. Indeed, genuine NLG algorithms, that is, those that can build words and syntax from the building blocks of letters and get smarter over time, are still nascent. Most of the 2014 submissions instead use rules to transform existing texts in creative ways, which also leads to topical similarities.

Moniker, a design studio based in Amsterdam, wrote a simple query that scans Twitter for sentences in the form “it’s + hour + : + minute + am/pm + and +” to compose a real-time global diary of daily activities. The “it’s [time] and I…” formula tends to elicit predictable confessions or complaints, showing how stock expressions automate our thoughts: “It’s 12:20 and I need a drink;” “It’s 1:00 pm and I have not moved from my bed;” “It’s 11:00 pm and I’ve finally got a decent cup of coffee.” Twide and Twejudice replaces words in the dialogue of Austen’s original with words used in similar contexts on Twitter, resulting in frivolous dialogue: (Mr Bennet asking Mrs Bennet about Mr Bingley:) “Is he/she overrun 0r single?” (Mrs Bennet exclaiming about Mr Bingley’s arrival:) “What _a fineee thingi 4my rageaholics girls!” While these lack the sophistication of The Seeker, by polluting Austen with Twitter diction, they illustrate how contemporary media have modified communication norms.

While open to anyone and, as in NaNoWriMo, governed by the single constraint that submissions contain at least 50,000 words, NaNoGenMo is gradually defining itself as a cohesive artistic movement that uses algorithms to experiment with literary form. The group’s identity is partly generated by ressentiment towards negative criticism that their “disjointed, robotic scripts” are “unlikely to trouble Booker judges.” Last year, one participant mocked how “futile it is to try to explain what we’re actually doing here, to the normals.” More positively, they are shaping identity through shared formal and critical resources. John Ohno (alias enkiv2) posted code to generate sestinas, haikus, and synonyms. Allison Parrish (alias aparrish) shared an interface to the Carnegie Mellon Pronouncing Dictionary that enables users to do things like scrape the dictionary for rhymes for a given word. Finally, Isaac Karth (alias ikarth) explained to members how the group’s tendency to assemble new poetry from prior texts has intellectual roots in Dadaism, Burroughs’s cut-up techniques, and the constraint-oriented works of Oulipo. When I spoke with Kazemi about the project, he said that Ken Goldsmith’s Uncreative Writing had inspired his thinking on how NaNoGenMo can challenge customary notions of authorship and creativity.

What if machines generated text with different stylistic goals? Or rather, what if we evaluated machine intelligence not by its humanness but by its alienness, by its ability to generate something beyond what we could have created—or would have thought to create—without the assistance of an algorithm? What if automated prose could rupture our automatized perceptions, as Shklovsky described poetry in Art as Device, and offer a new vehicle for our own creativity?

At least two 2014 submissions use dreams as a locus to explore the odd beauty of machine intelligence. Thricedotted’s The Seeker relates the autobiography of a machine trying to “learn about human behavior by reading WikiHow.” The work is visually beautiful, with each iteration of the algorithm’s operations punctuated by pages that raindrop abstractions and house aphorisms like “imagine not one thing could be undirected.” Like the hopscotch overtones in Cortázar’s Rayuela, the aphorisms encourage the reader to perceive meaningful patterns in what might otherwise be random data (Thricedotted’s internet identity often mentions apophenia). Time and again, the algorithm repeats a “work, scan, imagine” loop, scraping WikiHow, searching plain text memories for a concept encountered during “work,” and building a dream sequence—or “univision”—from concepts it doesn’t recognize. These univisions contain the most surprising poetry in the work, where beauty arises from the reader’s ineluctable tendency to feel meaning in fragments.

Evaluating these works by their capacity to read like human prose is a stale exercise because what qualifies as “natural” language is relative, not absolute. Our own linguistic habits are developed through interaction with others, be they members of a given social class, colleagues at work or school, or spambots littering our Twitter feeds. In a recent Medium post, Katie Rose Pipkin eloquently described how machines have already modified what we think of as natural language, whether we’re cognizant of it or not. We speak differently to search tools and virtual assistants because we have come to develop a tacit understanding of how they work and can modify our requests to communicate effectively.

It is this search to use automation as a vehicle for defamiliarization that makes NaNoGenMo so exciting. Darius Kazemi, an internet artist who runs an annual Bot Summit, created NaNoGenMo “on a whim” in November 2013. Thoughtful about literary form, Kazemi was amused by the fact that National Novel Writing Month (NaNoWriMo) set only two criteria for participants: submissions must be written in 30 days (the month of November) and must comprise at least 50,000 words. The absence of formal constraints invited experimentation: why write a novel when you can write an algorithm that writes a novel? He tweeted his idea, and a new community formed on GitHub, a web-based software development collaboration tool.

Earlier this year, the Wall Street Journal (WSJ) published an article with an interactive sidebar featuring excerpts from financial investment research reports. Readers were prompted to identify whether the excerpts were written by robots or humans. Admittedly, Wall Street’s preference for terse prose over poetic flourish makes a challenge like this sensible. “Q2 cash balance expectation of $830m implies ~$80m of cash burn in Q2 after a $140m reduction in cash balance in Q1,” a sampled sentence, is effectively just three data points fused together with syntax. And that’s no coincidence. White-collar workers like journalists need not fear for their job security (at least not yet…) because new natural language generation (NLG) algorithms are very good at representing structured data sets in prose, but not yet very good at much else. That capability in itself is very powerful, as our ability to draw insights from data often depends on how they are presented (e.g., a chart reveals insights one would have missed in rows and columns). But it is a far cry from the creative courage required to build a world on a blank page.

NLG algorithms are generally considered to be a form of “artificial” or “machine intelligence” because they do things—like write news articles about sports or the weather, or write real estate ads, as the prototype my Fast Forward Labs colleagues built—we believe humans alone can do. (I hope to explore the implications of the historical, relativist concept of artificial intelligence, espoused by people like Nancy Fulda, in a separate post.) As illustrated in the WSJ article, most people then evaluate NLG performance like André Bazin evaluates style in realism: as the art of realism lies in the seeming absence of artifice, so too does the art of algorithms lie in the seeming absence of automation. Commercialization only enhances this push towards verisimilitude, as investment banks and news agencies like Forbes won’t pay top dollar for software that generates strange prose. In turn, we come to judge machine intelligence by its humanness, orienting development efforts towards writing prose that we would have written ourselves.

Allison Parrish’s I Waded in Clear Water uses sentiment analysis algorithms, which rank sentences based upon features that indicate emotional texture, to transform Gustavus Hindman Miller’s Ten Thousand Dreams Interpreted. Parrish mobilizes the formulaic “action” and “denotation” structure of Miller’s text (action = “To see an oak full of acorns”; denotation = “denotes increase and promotion”). She first transforms the actions into first-person, simple past sentences (“I saw an oak full of acorns”) and then reorders the sentences from the worst to the best thing that can happen in dreams, according to a score given by a sentiment analysis algorithm run on the denotation. The sentiment scores create short chapters (“I drove into muddy water. I saw others weeding.”) and longer chapters with paratactic strings of disjointed actions: “…I descended a ladder. I saw any one lame. I saw my lover taking laudanum through disappointment. I heard mocking laughter. I kept a ledge. I had lice on my body. I saw. I lost it. I felt melancholy over any event. I saw others melancholy. I sent a message….” According to the sentiment algorithm, wading in clear water is our best dream.

Story 37

Earlier this year, the Wall Street Journal (WSJ) published an article with an interactive sidebar featuring excerpts from financial investment research reports. Readers were prompted to identify whether the excerpts were written by robots or humans. Admittedly, Wall Street’s preference for terse prose over poetic flourish makes a challenge like this sensible. “Q2 cash balance expectation of $830m implies ~$80m of cash burn in Q2 after a $140m reduction in cash balance in Q1,” a sampled sentence, is effectively just three data points fused together with syntax. And that’s no coincidence. White-collar workers like journalists need not fear for their job security (at least not yet…) because new natural language generation (NLG) algorithms are very good at representing structured data sets in prose, but not yet very good at much else. That capability in itself is very powerful, as our ability to draw insights from data often depends on how they are presented (e.g., a chart reveals insights one would have missed in rows and columns). But it is a far cry from the creative courage required to build a world on a blank page.

Allison Parrish’s I Waded in Clear Water uses sentiment analysis algorithms, which rank sentences based upon features that indicate emotional texture, to transform Gustavus Hindman Miller’s Ten Thousand Dreams Interpreted. Parrish mobilizes the formulaic “action” and “denotation” structure of Miller’s text (action = “To see an oak full of acorns”; denotation = “denotes increase and promotion”). She first transforms the actions into first-person, simple past sentences (“I saw an oak full of acorns”) and then reorders the sentences from the worst to the best thing that can happen in dreams, according to a score given by a sentiment analysis algorithm run on the denotation. The sentiment scores create short chapters (“I drove into muddy water. I saw others weeding.”) and longer chapters with paratactic strings of disjointed actions: “…I descended a ladder. I saw any one lame. I saw my lover taking laudanum through disappointment. I heard mocking laughter. I kept a ledge. I had lice on my body. I saw. I lost it. I felt melancholy over any event. I saw others melancholy. I sent a message….” According to the sentiment algorithm, wading in clear water is our best dream.

At least two 2014 submissions use dreams as a locus to explore the odd beauty of machine intelligence. Thricedotted’s The Seeker relates the autobiography of a machine trying to “learn about human behavior by reading WikiHow.” The work is visually beautiful, with each iteration of the algorithm’s operations punctuated by pages that raindrop abstractions and house aphorisms like “imagine not one thing could be undirected.” Like the hopscotch overtones in Cortázar’s Rayuela, the aphorisms encourage the reader to perceive meaningful patterns in what might otherwise be random data (Thricedotted’s internet identity often mentions apophenia). Time and again, the algorithm repeats a “work, scan, imagine” loop, scraping WikiHow, searching plain text memories for a concept encountered during “work,” and building a dream sequence—or “univision”—from concepts it doesn’t recognize. These univisions contain the most surprising poetry in the work, where beauty arises from the reader’s ineluctable tendency to feel meaning in fragments.

NLG algorithms are generally considered to be a form of “artificial” or “machine intelligence” because they do things—like write news articles about sports or the weather, or write real estate ads, as the prototype my Fast Forward Labs colleagues built—we believe humans alone can do. (I hope to explore the implications of the historical, relativist concept of artificial intelligence, espoused by people like Nancy Fulda, in a separate post.) As illustrated in the WSJ article, most people then evaluate NLG performance like André Bazin evaluates style in realism: as the art of realism lies in the seeming absence of artifice, so too does the art of algorithms lie in the seeming absence of automation. Commercialization only enhances this push towards verisimilitude, as investment banks and news agencies like Forbes won’t pay top dollar for software that generates strange prose. In turn, we come to judge machine intelligence by its humanness, orienting development efforts towards writing prose that we would have written ourselves.

Technical constraints explain why NaNoGenMo has come to align itself with poetics of recontextualization and reassembly. Indeed, genuine NLG algorithms, that is, those that can build words and syntax from the building blocks of letters and get smarter over time, are still nascent. Most of the 2014 submissions instead use rules to transform existing texts in creative ways, which also leads to topical similarities.

Evaluating these works by their capacity to read like human prose is a stale exercise because what qualifies as “natural” language is relative, not absolute. Our own linguistic habits are developed through interaction with others, be they members of a given social class, colleagues at work or school, or spambots littering our Twitter feeds. In a recent Medium post, Katie Rose Pipkin eloquently described how machines have already modified what we think of as natural language, whether we’re cognizant of it or not. We speak differently to search tools and virtual assistants because we have come to develop a tacit understanding of how they work and can modify our requests to communicate effectively.

While open to anyone and, as in NaNoWriMo, governed by the single constraint that submissions contain at least 50,000 words, NaNoGenMo is gradually defining itself as a cohesive artistic movement that uses algorithms to experiment with literary form. The group’s identity is partly generated by ressentiment towards negative criticism that their “disjointed, robotic scripts” are “unlikely to trouble Booker judges.” Last year, one participant mocked how “futile it is to try to explain what we’re actually doing here, to the normals.” More positively, they are shaping identity through shared formal and critical resources. John Ohno (alias enkiv2) posted code to generate sestinas, haikus, and synonyms. Allison Parrish (alias aparrish) shared an interface to the Carnegie Mellon Pronouncing Dictionary that enables users to do things like scrape the dictionary for rhymes for a given word. Finally, Isaac Karth (alias ikarth) explained to members how the group’s tendency to assemble new poetry from prior texts has intellectual roots in Dadaism, Burroughs’s cut-up techniques, and the constraint-oriented works of Oulipo. When I spoke with Kazemi about the project, he said that Ken Goldsmith’s Uncreative Writing had inspired his thinking on how NaNoGenMo can challenge customary notions of authorship and creativity.

It is this search to use automation as a vehicle for defamiliarization that makes NaNoGenMo so exciting. Darius Kazemi, an internet artist who runs an annual Bot Summit, created NaNoGenMo “on a whim” in November 2013. Thoughtful about literary form, Kazemi was amused by the fact that National Novel Writing Month (NaNoWriMo) set only two criteria for participants: submissions must be written in 30 days (the month of November) and must comprise at least 50,000 words. The absence of formal constraints invited experimentation: why write a novel when you can write an algorithm that writes a novel? He tweeted his idea, and a new community formed on GitHub, a web-based software development collaboration tool.

The latest developments in machine learning are enabling machines to develop models of us in turn, ever updating what information they present and how they present it to match the input we provide. Kazemi is addressing this new give-and-take between man and machine head-on in his 2015 NaNoGenMo submission, “co-authoring” a novel with an algorithm: for every ten sentences the algorithm drafts, he commits only the one he, as human, likes best. “Who wrote the book?” he asks. “[The algorithm] wrote literally every word, but [I] dictated nearly the entire form of the novel.” This is the same kind of dynamic that new research tools built on IBM Watson are presenting to lawyers and doctors: ROSS, a legal tool built on the Watson API, presents answers to research questions, and all the lawyer has to do is commit the answer she likes best. If NaNoGenMo helps us think more deeply about that dynamic, it can offer important insights into the future of AI.

Moniker, a design studio based in Amsterdam, wrote a simple query that scans Twitter for sentences in the form “it’s + hour + : + minute + am/pm + and +” to compose a real-time global diary of daily activities. The “it’s [time] and I…” formula tends to elicit predictable confessions or complaints, showing how stock expressions automate our thoughts: “It’s 12:20 and I need a drink;” “It’s 1:00 pm and I have not moved from my bed;” “It’s 11:00 pm and I’ve finally got a decent cup of coffee.” Twide and Twejudice replaces words in the dialogue of Austen’s original with words used in similar contexts on Twitter, resulting in frivolous dialogue: (Mr Bennet asking Mrs Bennet about Mr Bingley:) “Is he/she overrun 0r single?” (Mrs Bennet exclaiming about Mr Bingley’s arrival:) “What _a fineee thingi 4my rageaholics girls!” While these lack the sophistication of The Seeker, by polluting Austen with Twitter diction, they illustrate how contemporary media have modified communication norms.

What if machines generated text with different stylistic goals? Or rather, what if we evaluated machine intelligence not by its humanness but by its alienness, by its ability to generate something beyond what we could have created—or would have thought to create—without the assistance of an algorithm? What if automated prose could rupture our automatized perceptions, as Shklovsky described poetry in Art as Device, and offer a new vehicle for our own creativity?

Story 38

Moniker, a design studio based in Amsterdam, wrote a simple query that scans Twitter for sentences in the form “it’s + hour + : + minute + am/pm + and +” to compose a real-time global diary of daily activities. The “it’s [time] and I…” formula tends to elicit predictable confessions or complaints, showing how stock expressions automate our thoughts: “It’s 12:20 and I need a drink;” “It’s 1:00 pm and I have not moved from my bed;” “It’s 11:00 pm and I’ve finally got a decent cup of coffee.” Twide and Twejudice replaces words in the dialogue of Austen’s original with words used in similar contexts on Twitter, resulting in frivolous dialogue: (Mr Bennet asking Mrs Bennet about Mr Bingley:) “Is he/she overrun 0r single?” (Mrs Bennet exclaiming about Mr Bingley’s arrival:) “What _a fineee thingi 4my rageaholics girls!” While these lack the sophistication of The Seeker, by polluting Austen with Twitter diction, they illustrate how contemporary media have modified communication norms.

NLG algorithms are generally considered to be a form of “artificial” or “machine intelligence” because they do things—like write news articles about sports or the weather, or write real estate ads, as the prototype my Fast Forward Labs colleagues built—we believe humans alone can do. (I hope to explore the implications of the historical, relativist concept of artificial intelligence, espoused by people like Nancy Fulda, in a separate post.) As illustrated in the WSJ article, most people then evaluate NLG performance like André Bazin evaluates style in realism: as the art of realism lies in the seeming absence of artifice, so too does the art of algorithms lie in the seeming absence of automation. Commercialization only enhances this push towards verisimilitude, as investment banks and news agencies like Forbes won’t pay top dollar for software that generates strange prose. In turn, we come to judge machine intelligence by its humanness, orienting development efforts towards writing prose that we would have written ourselves.

At least two 2014 submissions use dreams as a locus to explore the odd beauty of machine intelligence. Thricedotted’s The Seeker relates the autobiography of a machine trying to “learn about human behavior by reading WikiHow.” The work is visually beautiful, with each iteration of the algorithm’s operations punctuated by pages that raindrop abstractions and house aphorisms like “imagine not one thing could be undirected.” Like the hopscotch overtones in Cortázar’s Rayuela, the aphorisms encourage the reader to perceive meaningful patterns in what might otherwise be random data (Thricedotted’s internet identity often mentions apophenia). Time and again, the algorithm repeats a “work, scan, imagine” loop, scraping WikiHow, searching plain text memories for a concept encountered during “work,” and building a dream sequence—or “univision”—from concepts it doesn’t recognize. These univisions contain the most surprising poetry in the work, where beauty arises from the reader’s ineluctable tendency to feel meaning in fragments.

What if machines generated text with different stylistic goals? Or rather, what if we evaluated machine intelligence not by its humanness but by its alienness, by its ability to generate something beyond what we could have created—or would have thought to create—without the assistance of an algorithm? What if automated prose could rupture our automatized perceptions, as Shklovsky described poetry in Art as Device, and offer a new vehicle for our own creativity?

It is this search to use automation as a vehicle for defamiliarization that makes NaNoGenMo so exciting. Darius Kazemi, an internet artist who runs an annual Bot Summit, created NaNoGenMo “on a whim” in November 2013. Thoughtful about literary form, Kazemi was amused by the fact that National Novel Writing Month (NaNoWriMo) set only two criteria for participants: submissions must be written in 30 days (the month of November) and must comprise at least 50,000 words. The absence of formal constraints invited experimentation: why write a novel when you can write an algorithm that writes a novel? He tweeted his idea, and a new community formed on GitHub, a web-based software development collaboration tool.

Technical constraints explain why NaNoGenMo has come to align itself with poetics of recontextualization and reassembly. Indeed, genuine NLG algorithms, that is, those that can build words and syntax from the building blocks of letters and get smarter over time, are still nascent. Most of the 2014 submissions instead use rules to transform existing texts in creative ways, which also leads to topical similarities.

The latest developments in machine learning are enabling machines to develop models of us in turn, ever updating what information they present and how they present it to match the input we provide. Kazemi is addressing this new give-and-take between man and machine head-on in his 2015 NaNoGenMo submission, “co-authoring” a novel with an algorithm: for every ten sentences the algorithm drafts, he commits only the one he, as human, likes best. “Who wrote the book?” he asks. “[The algorithm] wrote literally every word, but [I] dictated nearly the entire form of the novel.” This is the same kind of dynamic that new research tools built on IBM Watson are presenting to lawyers and doctors: ROSS, a legal tool built on the Watson API, presents answers to research questions, and all the lawyer has to do is commit the answer she likes best. If NaNoGenMo helps us think more deeply about that dynamic, it can offer important insights into the future of AI.

Evaluating these works by their capacity to read like human prose is a stale exercise because what qualifies as “natural” language is relative, not absolute. Our own linguistic habits are developed through interaction with others, be they members of a given social class, colleagues at work or school, or spambots littering our Twitter feeds. In a recent Medium post, Katie Rose Pipkin eloquently described how machines have already modified what we think of as natural language, whether we’re cognizant of it or not. We speak differently to search tools and virtual assistants because we have come to develop a tacit understanding of how they work and can modify our requests to communicate effectively.

While open to anyone and, as in NaNoWriMo, governed by the single constraint that submissions contain at least 50,000 words, NaNoGenMo is gradually defining itself as a cohesive artistic movement that uses algorithms to experiment with literary form. The group’s identity is partly generated by ressentiment towards negative criticism that their “disjointed, robotic scripts” are “unlikely to trouble Booker judges.” Last year, one participant mocked how “futile it is to try to explain what we’re actually doing here, to the normals.” More positively, they are shaping identity through shared formal and critical resources. John Ohno (alias enkiv2) posted code to generate sestinas, haikus, and synonyms. Allison Parrish (alias aparrish) shared an interface to the Carnegie Mellon Pronouncing Dictionary that enables users to do things like scrape the dictionary for rhymes for a given word. Finally, Isaac Karth (alias ikarth) explained to members how the group’s tendency to assemble new poetry from prior texts has intellectual roots in Dadaism, Burroughs’s cut-up techniques, and the constraint-oriented works of Oulipo. When I spoke with Kazemi about the project, he said that Ken Goldsmith’s Uncreative Writing had inspired his thinking on how NaNoGenMo can challenge customary notions of authorship and creativity.

Earlier this year, the Wall Street Journal (WSJ) published an article with an interactive sidebar featuring excerpts from financial investment research reports. Readers were prompted to identify whether the excerpts were written by robots or humans. Admittedly, Wall Street’s preference for terse prose over poetic flourish makes a challenge like this sensible. “Q2 cash balance expectation of $830m implies ~$80m of cash burn in Q2 after a $140m reduction in cash balance in Q1,” a sampled sentence, is effectively just three data points fused together with syntax. And that’s no coincidence. White-collar workers like journalists need not fear for their job security (at least not yet…) because new natural language generation (NLG) algorithms are very good at representing structured data sets in prose, but not yet very good at much else. That capability in itself is very powerful, as our ability to draw insights from data often depends on how they are presented (e.g., a chart reveals insights one would have missed in rows and columns). But it is a far cry from the creative courage required to build a world on a blank page.

Allison Parrish’s I Waded in Clear Water uses sentiment analysis algorithms, which rank sentences based upon features that indicate emotional texture, to transform Gustavus Hindman Miller’s Ten Thousand Dreams Interpreted. Parrish mobilizes the formulaic “action” and “denotation” structure of Miller’s text (action = “To see an oak full of acorns”; denotation = “denotes increase and promotion”). She first transforms the actions into first-person, simple past sentences (“I saw an oak full of acorns”) and then reorders the sentences from the worst to the best thing that can happen in dreams, according to a score given by a sentiment analysis algorithm run on the denotation. The sentiment scores create short chapters (“I drove into muddy water. I saw others weeding.”) and longer chapters with paratactic strings of disjointed actions: “…I descended a ladder. I saw any one lame. I saw my lover taking laudanum through disappointment. I heard mocking laughter. I kept a ledge. I had lice on my body. I saw. I lost it. I felt melancholy over any event. I saw others melancholy. I sent a message….” According to the sentiment algorithm, wading in clear water is our best dream.

Story 39

The latest developments in machine learning are enabling machines to develop models of us in turn, ever updating what information they present and how they present it to match the input we provide. Kazemi is addressing this new give-and-take between man and machine head-on in his 2015 NaNoGenMo submission, “co-authoring” a novel with an algorithm: for every ten sentences the algorithm drafts, he commits only the one he, as human, likes best. “Who wrote the book?” he asks. “[The algorithm] wrote literally every word, but [I] dictated nearly the entire form of the novel.” This is the same kind of dynamic that new research tools built on IBM Watson are presenting to lawyers and doctors: ROSS, a legal tool built on the Watson API, presents answers to research questions, and all the lawyer has to do is commit the answer she likes best. If NaNoGenMo helps us think more deeply about that dynamic, it can offer important insights into the future of AI.

What if machines generated text with different stylistic goals? Or rather, what if we evaluated machine intelligence not by its humanness but by its alienness, by its ability to generate something beyond what we could have created—or would have thought to create—without the assistance of an algorithm? What if automated prose could rupture our automatized perceptions, as Shklovsky described poetry in Art as Device, and offer a new vehicle for our own creativity?

Allison Parrish’s I Waded in Clear Water uses sentiment analysis algorithms, which rank sentences based upon features that indicate emotional texture, to transform Gustavus Hindman Miller’s Ten Thousand Dreams Interpreted. Parrish mobilizes the formulaic “action” and “denotation” structure of Miller’s text (action = “To see an oak full of acorns”; denotation = “denotes increase and promotion”). She first transforms the actions into first-person, simple past sentences (“I saw an oak full of acorns”) and then reorders the sentences from the worst to the best thing that can happen in dreams, according to a score given by a sentiment analysis algorithm run on the denotation. The sentiment scores create short chapters (“I drove into muddy water. I saw others weeding.”) and longer chapters with paratactic strings of disjointed actions: “…I descended a ladder. I saw any one lame. I saw my lover taking laudanum through disappointment. I heard mocking laughter. I kept a ledge. I had lice on my body. I saw. I lost it. I felt melancholy over any event. I saw others melancholy. I sent a message….” According to the sentiment algorithm, wading in clear water is our best dream.

Earlier this year, the Wall Street Journal (WSJ) published an article with an interactive sidebar featuring excerpts from financial investment research reports. Readers were prompted to identify whether the excerpts were written by robots or humans. Admittedly, Wall Street’s preference for terse prose over poetic flourish makes a challenge like this sensible. “Q2 cash balance expectation of $830m implies ~$80m of cash burn in Q2 after a $140m reduction in cash balance in Q1,” a sampled sentence, is effectively just three data points fused together with syntax. And that’s no coincidence. White-collar workers like journalists need not fear for their job security (at least not yet…) because new natural language generation (NLG) algorithms are very good at representing structured data sets in prose, but not yet very good at much else. That capability in itself is very powerful, as our ability to draw insights from data often depends on how they are presented (e.g., a chart reveals insights one would have missed in rows and columns). But it is a far cry from the creative courage required to build a world on a blank page.

At least two 2014 submissions use dreams as a locus to explore the odd beauty of machine intelligence. Thricedotted’s The Seeker relates the autobiography of a machine trying to “learn about human behavior by reading WikiHow.” The work is visually beautiful, with each iteration of the algorithm’s operations punctuated by pages that raindrop abstractions and house aphorisms like “imagine not one thing could be undirected.” Like the hopscotch overtones in Cortázar’s Rayuela, the aphorisms encourage the reader to perceive meaningful patterns in what might otherwise be random data (Thricedotted’s internet identity often mentions apophenia). Time and again, the algorithm repeats a “work, scan, imagine” loop, scraping WikiHow, searching plain text memories for a concept encountered during “work,” and building a dream sequence—or “univision”—from concepts it doesn’t recognize. These univisions contain the most surprising poetry in the work, where beauty arises from the reader’s ineluctable tendency to feel meaning in fragments.

Technical constraints explain why NaNoGenMo has come to align itself with poetics of recontextualization and reassembly. Indeed, genuine NLG algorithms, that is, those that can build words and syntax from the building blocks of letters and get smarter over time, are still nascent. Most of the 2014 submissions instead use rules to transform existing texts in creative ways, which also leads to topical similarities.

Evaluating these works by their capacity to read like human prose is a stale exercise because what qualifies as “natural” language is relative, not absolute. Our own linguistic habits are developed through interaction with others, be they members of a given social class, colleagues at work or school, or spambots littering our Twitter feeds. In a recent Medium post, Katie Rose Pipkin eloquently described how machines have already modified what we think of as natural language, whether we’re cognizant of it or not. We speak differently to search tools and virtual assistants because we have come to develop a tacit understanding of how they work and can modify our requests to communicate effectively.

NLG algorithms are generally considered to be a form of “artificial” or “machine intelligence” because they do things—like write news articles about sports or the weather, or write real estate ads, as the prototype my Fast Forward Labs colleagues built—we believe humans alone can do. (I hope to explore the implications of the historical, relativist concept of artificial intelligence, espoused by people like Nancy Fulda, in a separate post.) As illustrated in the WSJ article, most people then evaluate NLG performance like André Bazin evaluates style in realism: as the art of realism lies in the seeming absence of artifice, so too does the art of algorithms lie in the seeming absence of automation. Commercialization only enhances this push towards verisimilitude, as investment banks and news agencies like Forbes won’t pay top dollar for software that generates strange prose. In turn, we come to judge machine intelligence by its humanness, orienting development efforts towards writing prose that we would have written ourselves.

Moniker, a design studio based in Amsterdam, wrote a simple query that scans Twitter for sentences in the form “it’s + hour + : + minute + am/pm + and +” to compose a real-time global diary of daily activities. The “it’s [time] and I…” formula tends to elicit predictable confessions or complaints, showing how stock expressions automate our thoughts: “It’s 12:20 and I need a drink;” “It’s 1:00 pm and I have not moved from my bed;” “It’s 11:00 pm and I’ve finally got a decent cup of coffee.” Twide and Twejudice replaces words in the dialogue of Austen’s original with words used in similar contexts on Twitter, resulting in frivolous dialogue: (Mr Bennet asking Mrs Bennet about Mr Bingley:) “Is he/she overrun 0r single?” (Mrs Bennet exclaiming about Mr Bingley’s arrival:) “What _a fineee thingi 4my rageaholics girls!” While these lack the sophistication of The Seeker, by polluting Austen with Twitter diction, they illustrate how contemporary media have modified communication norms.

While open to anyone and, as in NaNoWriMo, governed by the single constraint that submissions contain at least 50,000 words, NaNoGenMo is gradually defining itself as a cohesive artistic movement that uses algorithms to experiment with literary form. The group’s identity is partly generated by ressentiment towards negative criticism that their “disjointed, robotic scripts” are “unlikely to trouble Booker judges.” Last year, one participant mocked how “futile it is to try to explain what we’re actually doing here, to the normals.” More positively, they are shaping identity through shared formal and critical resources. John Ohno (alias enkiv2) posted code to generate sestinas, haikus, and synonyms. Allison Parrish (alias aparrish) shared an interface to the Carnegie Mellon Pronouncing Dictionary that enables users to do things like scrape the dictionary for rhymes for a given word. Finally, Isaac Karth (alias ikarth) explained to members how the group’s tendency to assemble new poetry from prior texts has intellectual roots in Dadaism, Burroughs’s cut-up techniques, and the constraint-oriented works of Oulipo. When I spoke with Kazemi about the project, he said that Ken Goldsmith’s Uncreative Writing had inspired his thinking on how NaNoGenMo can challenge customary notions of authorship and creativity.

It is this search to use automation as a vehicle for defamiliarization that makes NaNoGenMo so exciting. Darius Kazemi, an internet artist who runs an annual Bot Summit, created NaNoGenMo “on a whim” in November 2013. Thoughtful about literary form, Kazemi was amused by the fact that National Novel Writing Month (NaNoWriMo) set only two criteria for participants: submissions must be written in 30 days (the month of November) and must comprise at least 50,000 words. The absence of formal constraints invited experimentation: why write a novel when you can write an algorithm that writes a novel? He tweeted his idea, and a new community formed on GitHub, a web-based software development collaboration tool.

Story 40

At least two 2014 submissions use dreams as a locus to explore the odd beauty of machine intelligence. Thricedotted’s The Seeker relates the autobiography of a machine trying to “learn about human behavior by reading WikiHow.” The work is visually beautiful, with each iteration of the algorithm’s operations punctuated by pages that raindrop abstractions and house aphorisms like “imagine not one thing could be undirected.” Like the hopscotch overtones in Cortázar’s Rayuela, the aphorisms encourage the reader to perceive meaningful patterns in what might otherwise be random data (Thricedotted’s internet identity often mentions apophenia). Time and again, the algorithm repeats a “work, scan, imagine” loop, scraping WikiHow, searching plain text memories for a concept encountered during “work,” and building a dream sequence—or “univision”—from concepts it doesn’t recognize. These univisions contain the most surprising poetry in the work, where beauty arises from the reader’s ineluctable tendency to feel meaning in fragments.

Moniker, a design studio based in Amsterdam, wrote a simple query that scans Twitter for sentences in the form “it’s + hour + : + minute + am/pm + and +” to compose a real-time global diary of daily activities. The “it’s [hour:minute] and I…” prompt tends to elicit predictable confessions or complaints, showing how stock expressions automate our thoughts: “It’s 12:20 and I need a drink;” “It’s 1:00 pm and I have not moved from my bed;” “It’s 11:00 pm and I’ve finally got a decent cup of coffee.” Twide and Twejudice replaces most of the dialogue in Austen’s original with words used in similar contexts on Twitter, resulting in frivolous exchanges: (Mr Bennet asking Mrs Bennet about Mr Bingley:) “Is he/she overrun 0r single?” (Mrs Bennet exclaiming about Mr Bingley’s arrival:) “What _a fineee thingi 4my rageaholics girls!” While these works lack the sophistication of The Seeker, by polluting Austen with Twitter diction they illustrate how contemporary media have modified communication norms.

What if machines generated text with different stylistic goals? Or rather, what if we evaluated machine intelligence not by its humanness but by its alienness, by its ability to generate something beyond what we could have created—or would have thought to create—without the assistance of an algorithm? What if automated prose could rupture our automatized perceptions, as Shklovsky described poetry in Art as Device, and offer a new vehicle for our own creativity?

NLG algorithms are generally considered to be a form of “artificial” or “machine intelligence” because they do things—like write news articles about sports or the weather, or write real estate ads, as does the prototype my Fast Forward Labs colleagues built—that we believe humans alone can do. (I hope to explore the implications of the historical, relativist concept of artificial intelligence, espoused by people like Nancy Fulda, in a separate post.) As illustrated in the WSJ article, most people then evaluate NLG performance the way André Bazin evaluates style in realism: as the art of realism lies in the seeming absence of artifice, so too does the art of algorithms lie in the seeming absence of automation. Commercialization only strengthens this push towards verisimilitude, as investment banks and news outlets like Forbes won’t pay top dollar for software that generates strange prose. In turn, we come to judge machine intelligence by its humanness, orienting development efforts towards writing prose that we would have written ourselves.

While open to anyone and, as in NaNoWriMo, governed by the single constraint that submissions contain at least 50,000 words, NaNoGenMo is gradually defining itself as a cohesive artistic movement that uses algorithms to experiment with literary form. The group’s identity is partly generated by ressentiment towards negative criticism that its “disjointed, robotic scripts” are “unlikely to trouble Booker judges.” Last year, one participant mocked how “futile it is to try to explain what we’re actually doing here, to the normals.” More positively, members are shaping that identity through shared formal and critical resources. John Ohno (alias enkiv2) posted code to generate sestinas, haikus, and synonyms. Allison Parrish (alias aparrish) shared an interface to the Carnegie Mellon Pronouncing Dictionary that enables users to do things like scrape the dictionary for rhymes for a given word. Finally, Isaac Karth (alias ikarth) explained to members how the group’s tendency to assemble new poetry from prior texts has intellectual roots in Dadaism, Burroughs’s cut-up techniques, and the constraint-oriented works of Oulipo. When I spoke with Kazemi about the project, he said that Ken Goldsmith’s Uncreative Writing had inspired his thinking on how NaNoGenMo can challenge customary notions of authorship and creativity.

It is this search to use automation as a vehicle for defamiliarization that makes NaNoGenMo so exciting. Darius Kazemi, an internet artist who runs an annual Bot Summit, created NaNoGenMo “on a whim” in November 2013. Thoughtful about literary form, Kazemi was amused by the fact that National Novel Writing Month (NaNoWriMo) set only two criteria for participants: submissions must be written in 30 days (the month of November) and must comprise at least 50,000 words. The absence of formal constraints invited experimentation: why write a novel when you can write an algorithm that writes a novel? He tweeted his idea, and a new community formed on GitHub, a web-based software collaboration platform.

The latest developments in machine learning are enabling machines to develop models of us in turn, ever updating what information they present and how they present it to match the input we provide. Kazemi addresses this new give-and-take between man and machine head-on in his 2015 NaNoGenMo submission, “co-authoring” a novel with an algorithm: for every ten sentences the algorithm drafts, he commits only the one he, as the human, likes best. “Who wrote the book?” he asks. “[The algorithm] wrote literally every word, but [I] dictated nearly the entire form of the novel.” This is the same dynamic that new research tools built on IBM Watson present to lawyers and doctors: ROSS, a legal tool built on the Watson API, presents answers to research questions, and all the lawyer has to do is commit the answer she likes best. If NaNoGenMo helps us think more deeply about that dynamic, it can offer important insights into the future of AI.
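
A minimal sketch of that one-in-ten selection scheme, with generate_sentence standing in for whatever model drafts the prose (Kazemi’s actual code is not shown in this piece):

```python
def coauthor(generate_sentence, choose, target_sentences):
    """The algorithm drafts ten candidates per step; the human commits one."""
    novel = []
    while len(novel) < target_sentences:
        candidates = [generate_sentence(novel) for _ in range(10)]
        novel.append(choose(candidates))  # the human picks the one they like best
    return " ".join(novel)

def console_choose(candidates):
    """One possible human interface: pick a candidate at the console."""
    for i, sentence in enumerate(candidates):
        print(f"{i}: {sentence}")
    return candidates[int(input("keep which? "))]
```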

Evaluating these works by their capacity to read like human prose is a stale exercise because what qualifies as “natural” language is relative, not absolute. Our own linguistic habits develop through interaction with others, be they members of a given social class, colleagues at work or school, or spambots littering our Twitter feeds. In a recent Medium post, Katie Rose Pipkin eloquently described how machines have already modified what we think of as natural language, whether we’re cognizant of it or not. We speak differently to search tools and virtual assistants because we have come to develop a tacit understanding of how they work and can modify our requests to communicate effectively.

Technical constraints explain why NaNoGenMo has come to align itself with a poetics of recontextualization and reassembly. Genuine NLG algorithms, that is, those that can build words and syntax from the building blocks of letters and get smarter over time, are still nascent. Most of the 2014 submissions instead use rules to transform existing texts in creative ways, which also leads to topical similarities among submissions.

Earlier this year, the Wall Street Journal (WSJ) published an article with an interactive sidebar featuring excerpts from financial investment research reports. Readers were prompted to identify whether the excerpts were written by robots or humans. Admittedly, Wall Street’s preference for terse prose over poetic flourish makes a challenge like this plausible. One sampled sentence, “Q2 cash balance expectation of $830m implies ~$80m of cash burn in Q2 after a $140m reduction in cash balance in Q1,” is effectively just three data points fused together with syntax. And that’s no coincidence. White-collar workers like journalists need not fear for their job security (at least not yet…) because new natural language generation (NLG) algorithms are very good at representing structured data sets in prose, but not yet very good at much else. That capability is in itself powerful, as our ability to draw insights from data often depends on how they are presented (e.g., a chart reveals insights one would have missed in rows and columns). But it is a far cry from the creative courage required to build a world on a blank page.
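
To see how little “intelligence” the sampled sentence requires, here is a toy template that regenerates it from three numbers (an illustration, not any vendor’s actual system):

```python
report = {"q2_cash_expectation": 830, "q2_burn": 80, "q1_reduction": 140}

sentence = (
    "Q2 cash balance expectation of ${q2_cash_expectation}m implies "
    "~${q2_burn}m of cash burn in Q2 after a ${q1_reduction}m reduction "
    "in cash balance in Q1"
).format(**report)

print(sentence)  # reproduces the sampled WSJ sentence verbatim
```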

Allison Parrish’s I Waded in Clear Water uses sentiment analysis algorithms, which rank sentences based on features that indicate emotional texture, to transform Gustavus Hindman Miller’s Ten Thousand Dreams Interpreted. Parrish mobilizes the formulaic “action” and “denotation” structure of Miller’s text (action = “To see an oak full of acorns”; denotation = “denotes increase and promotion”). She first transforms the actions into first-person, simple-past sentences (“I saw an oak full of acorns”) and then reorders the sentences from the worst to the best thing that can happen in dreams, according to a score given by a sentiment analysis algorithm run on the denotation. The sentiment scores create short chapters (“I drove into muddy water. I saw others weeding.”) and longer chapters with paratactic strings of disjointed actions: “…I descended a ladder. I saw any one lame. I saw my lover taking laudanum through disappointment. I heard mocking laughter. I kept a ledge. I had lice on my body. I saw. I lost it. I felt melancholy over any event. I saw others melancholy. I sent a message….” According to the sentiment algorithm, wading in clear water is our best dream.
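
Parrish’s own parsing and sentiment model are not reproduced here, but the reordering step can be sketched with TextBlob’s polarity score standing in for “a sentiment analysis algorithm run on the denotation”; the second and third denotations below are invented for illustration:

```python
from textblob import TextBlob  # pip install textblob

entries = [
    # (first-person action, denotation) pairs in the style of Miller's text
    ("I saw an oak full of acorns", "denotes increase and promotion"),
    ("I drove into muddy water", "denotes distressing and unhappy news"),  # invented
    ("I waded in clear water", "denotes happy and prosperous delights"),   # invented
]

# Order the dreams from worst to best by sentiment on the denotation.
ordered = sorted(entries, key=lambda pair: TextBlob(pair[1]).sentiment.polarity)
for action, _ in ordered:
    print(action + ".")
```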

Story 41

Moniker, a design studio based in Amsterdam, wrote a simple query that scans Twitter for sentences in the form “it’s + hour + : + minute + am/pm + and +” to compose a real-time global diary of daily activities. The “it’s [hour:minute] and I…” prompt tends to elicit predictable confessions or complaints, showing how stock expressions automate our thoughts: “It’s 12:20 and I need a drink;” “It’s 1:00 pm and I have not moved from my bed;” “It’s 11:00 pm and I’ve finally got a decent cup of coffee.” Twide and Twejudice replaces most of the dialogue in Austen’s original with words used in similar contexts on Twitter, resulting in frivolous exchanges: (Mr Bennet asking Mrs Bennet about Mr Bingley:) “Is he/she overrun 0r single?” (Mrs Bennet exclaiming about Mr Bingley’s arrival:) “What _a fineee thingi 4my rageaholics girls!” While these works lack the sophistication of The Seeker, by polluting Austen with Twitter diction they illustrate how contemporary media have modified communication norms.

While open to anyone and, as in NaNoWriMo, governed by the single constraint that submissions contain at least 50,000 words, NaNoGenMo is gradually defining itself as a cohesive artistic movement that uses algorithms to experiment with literary form. The group’s identity is partly generated by ressentiment towards negative criticism that its “disjointed, robotic scripts” are “unlikely to trouble Booker judges.” Last year, one participant mocked how “futile it is to try to explain what we’re actually doing here, to the normals.” More positively, members are shaping that identity through shared formal and critical resources. John Ohno (alias enkiv2) posted code to generate sestinas, haikus, and synonyms. Allison Parrish (alias aparrish) shared an interface to the Carnegie Mellon Pronouncing Dictionary that enables users to do things like scrape the dictionary for rhymes for a given word. Finally, Isaac Karth (alias ikarth) explained to members how the group’s tendency to assemble new poetry from prior texts has intellectual roots in Dadaism, Burroughs’s cut-up techniques, and the constraint-oriented works of Oulipo. When I spoke with Kazemi about the project, he said that Ken Goldsmith’s Uncreative Writing had inspired his thinking on how NaNoGenMo can challenge customary notions of authorship and creativity.

What if machines generated text with different stylistic goals? Or rather, what if we evaluated machine intelligence not by its humanness but by its alienness, by its ability to generate something beyond what we could have created—or would have thought to create—without the assistance of an algorithm? What if automated prose could rupture our automatized perceptions, as Shklovsky described poetry in Art as Device, and offer a new vehicle for our own creativity?

At least two 2014 submissions use dreams as a locus to explore the odd beauty of machine intelligence. Thricedotted’s The Seeker relates the autobiography of a machine trying to “learn about human behavior by reading WikiHow.” The work is visually beautiful, with each iteration of the algorithm’s operations punctuated by pages that raindrop abstractions and house aphorisms like “imagine not one thing could be undirected.” Like the hopscotch overtones in Cortázar’s Rayuela, the aphorisms encourage the reader to perceive meaningful patterns in what might otherwise be random data (Thricedotted’s internet identity often mentions apophenia). Time and again, the algorithm repeats a “work, scan, imagine” loop, scraping WikiHow, searching plain text memories for a concept encountered during “work,” and building a dream sequence—or “univision”—from concepts it doesn’t recognize. These univisions contain the most surprising poetry in the work, where beauty arises from the reader’s ineluctable tendency to feel meaning in fragments.

Earlier this year, the Wall Street Journal (WSJ) published an article with an interactive sidebar featuring excerpts from financial investment research reports. Readers were prompted to identify whether the excerpts were written by robots or humans. Admittedly, Wall Street’s preference for terse prose over poetic flourish makes a challenge like this plausible. One sampled sentence, “Q2 cash balance expectation of $830m implies ~$80m of cash burn in Q2 after a $140m reduction in cash balance in Q1,” is effectively just three data points fused together with syntax. And that’s no coincidence. White-collar workers like journalists need not fear for their job security (at least not yet…) because new natural language generation (NLG) algorithms are very good at representing structured data sets in prose, but not yet very good at much else. That capability is in itself powerful, as our ability to draw insights from data often depends on how they are presented (e.g., a chart reveals insights one would have missed in rows and columns). But it is a far cry from the creative courage required to build a world on a blank page.

The latest developments in machine learning are enabling machines to develop models of us in turn, ever updating what information they present and how they present it to match the input we provide. Kazemi addresses this new give-and-take between man and machine head-on in his 2015 NaNoGenMo submission, “co-authoring” a novel with an algorithm: for every ten sentences the algorithm drafts, he commits only the one he, as the human, likes best. “Who wrote the book?” he asks. “[The algorithm] wrote literally every word, but [I] dictated nearly the entire form of the novel.” This is the same dynamic that new research tools built on IBM Watson present to lawyers and doctors: ROSS, a legal tool built on the Watson API, presents answers to research questions, and all the lawyer has to do is commit the answer she likes best. If NaNoGenMo helps us think more deeply about that dynamic, it can offer important insights into the future of AI.

Technical constraints explain why NaNoGenMo has come to align itself with a poetics of recontextualization and reassembly. Genuine NLG algorithms, that is, those that can build words and syntax from the building blocks of letters and get smarter over time, are still nascent. Most of the 2014 submissions instead use rules to transform existing texts in creative ways, which also leads to topical similarities among submissions.

NLG algorithms are generally considered to be a form of “artificial” or “machine intelligence” because they do things—like write news articles about sports or the weather, or write real estate ads, as does the prototype my Fast Forward Labs colleagues built—that we believe humans alone can do. (I hope to explore the implications of the historical, relativist concept of artificial intelligence, espoused by people like Nancy Fulda, in a separate post.) As illustrated in the WSJ article, most people then evaluate NLG performance the way André Bazin evaluates style in realism: as the art of realism lies in the seeming absence of artifice, so too does the art of algorithms lie in the seeming absence of automation. Commercialization only strengthens this push towards verisimilitude, as investment banks and news outlets like Forbes won’t pay top dollar for software that generates strange prose. In turn, we come to judge machine intelligence by its humanness, orienting development efforts towards writing prose that we would have written ourselves.

Evaluating these works by their capacity to read like human prose is a stale exercise because what qualifies as “natural” language is relative, not absolute. Our own linguistic habits develop through interaction with others, be they members of a given social class, colleagues at work or school, or spambots littering our Twitter feeds. In a recent Medium post, Katie Rose Pipkin eloquently described how machines have already modified what we think of as natural language, whether we’re cognizant of it or not. We speak differently to search tools and virtual assistants because we have come to develop a tacit understanding of how they work and can modify our requests to communicate effectively.

It is this search to use automation as a vehicle for defamiliarization that makes NaNoGenMo so exciting. Darius Kazemi, an internet artist who runs an annual Bot Summit, created NaNoGenMo “on a whim” in November 2013. Thoughtful about literary form, Kazemi was amused by the fact that National Novel Writing Month (NaNoWriMo) set only two criteria for participants: submissions must be written in 30 days (the month of November) and must comprise at least 50,000 words. The absence of formal constraints invited experimentation: why write a novel when you can write an algorithm that writes a novel? He tweeted his idea, and a new community formed on GitHub, a web-based software collaboration platform.

Allison Parrish’s I Waded in Clear Water uses sentiment analysis algorithms, which rank sentences based on features that indicate emotional texture, to transform Gustavus Hindman Miller’s Ten Thousand Dreams Interpreted. Parrish mobilizes the formulaic “action” and “denotation” structure of Miller’s text (action = “To see an oak full of acorns”; denotation = “denotes increase and promotion”). She first transforms the actions into first-person, simple-past sentences (“I saw an oak full of acorns”) and then reorders the sentences from the worst to the best thing that can happen in dreams, according to a score given by a sentiment analysis algorithm run on the denotation. The sentiment scores create short chapters (“I drove into muddy water. I saw others weeding.”) and longer chapters with paratactic strings of disjointed actions: “…I descended a ladder. I saw any one lame. I saw my lover taking laudanum through disappointment. I heard mocking laughter. I kept a ledge. I had lice on my body. I saw. I lost it. I felt melancholy over any event. I saw others melancholy. I sent a message….” According to the sentiment algorithm, wading in clear water is our best dream.

Story 42

Earlier this year, the Wall Street Journal (WSJ) published an article with an interactive sidebar featuring excerpts from financial investment research reports. Readers were prompted to identify whether the excerpts were written by robots or humans. Admittedly, Wall Street’s preference for terse prose over poetic flourish makes a challenge like this plausible. One sampled sentence, “Q2 cash balance expectation of $830m implies ~$80m of cash burn in Q2 after a $140m reduction in cash balance in Q1,” is effectively just three data points fused together with syntax. And that’s no coincidence. White-collar workers like journalists need not fear for their job security (at least not yet…) because new natural language generation (NLG) algorithms are very good at representing structured data sets in prose, but not yet very good at much else. That capability is in itself powerful, as our ability to draw insights from data often depends on how they are presented (e.g., a chart reveals insights one would have missed in rows and columns). But it is a far cry from the creative courage required to build a world on a blank page.

Allison Parrish’s I Waded in Clear Water uses sentiment analysis algorithms, which rank sentences based on features that indicate emotional texture, to transform Gustavus Hindman Miller’s Ten Thousand Dreams Interpreted. Parrish mobilizes the formulaic “action” and “denotation” structure of Miller’s text (action = “To see an oak full of acorns”; denotation = “denotes increase and promotion”). She first transforms the actions into first-person, simple-past sentences (“I saw an oak full of acorns”) and then reorders the sentences from the worst to the best thing that can happen in dreams, according to a score given by a sentiment analysis algorithm run on the denotation. The sentiment scores create short chapters (“I drove into muddy water. I saw others weeding.”) and longer chapters with paratactic strings of disjointed actions: “…I descended a ladder. I saw any one lame. I saw my lover taking laudanum through disappointment. I heard mocking laughter. I kept a ledge. I had lice on my body. I saw. I lost it. I felt melancholy over any event. I saw others melancholy. I sent a message….” According to the sentiment algorithm, wading in clear water is our best dream.

At least two 2014 submissions use dreams as a locus to explore the odd beauty of machine intelligence. Thricedotted’s The Seeker relates the autobiography of a machine trying to “learn about human behavior by reading WikiHow.” The work is visually beautiful, with each iteration of the algorithm’s operations punctuated by pages that raindrop abstractions and house aphorisms like “imagine not one thing could be undirected.” Like the hopscotch overtones in Cortázar’s Rayuela, the aphorisms encourage the reader to perceive meaningful patterns in what might otherwise be random data (Thricedotted’s internet identity often mentions apophenia). Time and again, the algorithm repeats a “work, scan, imagine” loop, scraping WikiHow, searching plain text memories for a concept encountered during “work,” and building a dream sequence—or “univision”—from concepts it doesn’t recognize. These univisions contain the most surprising poetry in the work, where beauty arises from the reader’s ineluctable tendency to feel meaning in fragments.

What if machines generated text with different stylistic goals? Or rather, what if we evaluated machine intelligence not by its humanness but by its alienness, by its ability to generate something beyond what we could have created—or would have thought to create—without the assistance of an algorithm? What if automated prose could rupture our automatized perceptions, as Shklovsky described poetry in Art as Device, and offer a new vehicle for our own creativity?

It is this search to use automation as a vehicle for defamiliarization that makes NaNoGenMo so exciting. Darius Kazemi, an internet artist who runs an annual Bot Summit, created NaNoGenMo “on a whim” in November 2013. Thoughtful about literary form, Kazemi was amused by the fact that National Novel Writing Month (NaNoWriMo) set only two criteria for participants: submissions must be written in 30 days (the month of November) and must comprise at least 50,000 words. The absence of formal constraints invited experimentation: why write a novel when you can write an algorithm that writes a novel? He tweeted his idea, and a new community formed on GitHub, a web-based software collaboration platform.

Evaluating these works by their capacity to read like human prose is a stale exercise because what qualifies as “natural” language is relative, not absolute. Our own linguistic habits develop through interaction with others, be they members of a given social class, colleagues at work or school, or spambots littering our Twitter feeds. In a recent Medium post, Katie Rose Pipkin eloquently described how machines have already modified what we think of as natural language, whether we’re cognizant of it or not. We speak differently to search tools and virtual assistants because we have come to develop a tacit understanding of how they work and can modify our requests to communicate effectively.

Moniker, a design studio based in Amsterdam, wrote a simple query that scans Twitter for sentences in the form “it’s + hour + : + minute + am/pm + and +” to compose a real-time global diary of daily activities. The “it’s [hour:minute] and I…” prompt tends to elicit predictable confessions or complaints, showing how stock expressions automate our thoughts: “It’s 12:20 and I need a drink;” “It’s 1:00 pm and I have not moved from my bed;” “It’s 11:00 pm and I’ve finally got a decent cup of coffee.” Twide and Twejudice replaces most of the dialogue in Austen’s original with words used in similar contexts on Twitter, resulting in frivolous exchanges: (Mr Bennet asking Mrs Bennet about Mr Bingley:) “Is he/she overrun 0r single?” (Mrs Bennet exclaiming about Mr Bingley’s arrival:) “What _a fineee thingi 4my rageaholics girls!” While these works lack the sophistication of The Seeker, by polluting Austen with Twitter diction they illustrate how contemporary media have modified communication norms.

Technical constraints explain why NaNoGenMo has come to align itself with a poetics of recontextualization and reassembly. Genuine NLG algorithms, that is, those that can build words and syntax from the building blocks of letters and get smarter over time, are still nascent. Most of the 2014 submissions instead use rules to transform existing texts in creative ways, which also leads to topical similarities among submissions.

NLG algorithms are generally considered to be a form of “artificial” or “machine intelligence” because they do things—like write news articles about sports or the weather, or write real estate ads, as does the prototype my Fast Forward Labs colleagues built—that we believe humans alone can do. (I hope to explore the implications of the historical, relativist concept of artificial intelligence, espoused by people like Nancy Fulda, in a separate post.) As illustrated in the WSJ article, most people then evaluate NLG performance the way André Bazin evaluates style in realism: as the art of realism lies in the seeming absence of artifice, so too does the art of algorithms lie in the seeming absence of automation. Commercialization only strengthens this push towards verisimilitude, as investment banks and news outlets like Forbes won’t pay top dollar for software that generates strange prose. In turn, we come to judge machine intelligence by its humanness, orienting development efforts towards writing prose that we would have written ourselves.

While open to anyone and, as in NaNoWriMo, governed by the single constraint that submissions contain at least 50,000 words, NaNoGenMo is gradually defining itself as a cohesive artistic movement that uses algorithms to experiment with literary form. The group’s identity is partly generated by ressentiment towards negative criticism that its “disjointed, robotic scripts” are “unlikely to trouble Booker judges.” Last year, one participant mocked how “futile it is to try to explain what we’re actually doing here, to the normals.” More positively, members are shaping that identity through shared formal and critical resources. John Ohno (alias enkiv2) posted code to generate sestinas, haikus, and synonyms. Allison Parrish (alias aparrish) shared an interface to the Carnegie Mellon Pronouncing Dictionary that enables users to do things like scrape the dictionary for rhymes for a given word. Finally, Isaac Karth (alias ikarth) explained to members how the group’s tendency to assemble new poetry from prior texts has intellectual roots in Dadaism, Burroughs’s cut-up techniques, and the constraint-oriented works of Oulipo. When I spoke with Kazemi about the project, he said that Ken Goldsmith’s Uncreative Writing had inspired his thinking on how NaNoGenMo can challenge customary notions of authorship and creativity.

The latest developments in machine learning are enabling machines to develop models of us in turn, ever updating what information they present and how they present it to match the input we provide. Kazemi addresses this new give-and-take between man and machine head-on in his 2015 NaNoGenMo submission, “co-authoring” a novel with an algorithm: for every ten sentences the algorithm drafts, he commits only the one he, as the human, likes best. “Who wrote the book?” he asks. “[The algorithm] wrote literally every word, but [I] dictated nearly the entire form of the novel.” This is the same dynamic that new research tools built on IBM Watson present to lawyers and doctors: ROSS, a legal tool built on the Watson API, presents answers to research questions, and all the lawyer has to do is commit the answer she likes best. If NaNoGenMo helps us think more deeply about that dynamic, it can offer important insights into the future of AI.

Story 43

At least two 2014 submissions use dreams as a locus to explore the odd beauty of machine intelligence. Thricedotted’s The Seeker relates the autobiography of a machine trying to “learn about human behavior by reading WikiHow.” The work is visually beautiful, with each iteration of the algorithm’s operations punctuated by pages that raindrop abstractions and house aphorisms like “imagine not one thing could be undirected.” Like the hopscotch overtones in Cortázar’s Rayuela, the aphorisms encourage the reader to perceive meaningful patterns in what might otherwise be random data (Thricedotted’s internet identity often mentions apophenia). Time and again, the algorithm repeats a “work, scan, imagine” loop, scraping WikiHow, searching plain text memories for a concept encountered during “work,” and building a dream sequence—or “univision”—from concepts it doesn’t recognize. These univisions contain the most surprising poetry in the work, where beauty arises from the reader’s ineluctable tendency to feel meaning in fragments.

While open to anyone and, as in NaNoWriMo, governed by the single constraint that submissions contain at least 50,000 words, NaNoGenMo is gradually defining itself as a cohesive artistic movement that uses algorithms to experiment with literary form. The group’s identity is partly generated by ressentiment towards negative criticism that its “disjointed, robotic scripts” are “unlikely to trouble Booker judges.” Last year, one participant mocked how “futile it is to try to explain what we’re actually doing here, to the normals.” More positively, members are shaping that identity through shared formal and critical resources. John Ohno (alias enkiv2) posted code to generate sestinas, haikus, and synonyms. Allison Parrish (alias aparrish) shared an interface to the Carnegie Mellon Pronouncing Dictionary that enables users to do things like scrape the dictionary for rhymes for a given word. Finally, Isaac Karth (alias ikarth) explained to members how the group’s tendency to assemble new poetry from prior texts has intellectual roots in Dadaism, Burroughs’s cut-up techniques, and the constraint-oriented works of Oulipo. When I spoke with Kazemi about the project, he said that Ken Goldsmith’s Uncreative Writing had inspired his thinking on how NaNoGenMo can challenge customary notions of authorship and creativity.

Evaluating these works by their capacity to read like human prose is a stale exercise because what qualifies as “natural” language is relative, not absolute. Our own linguistic habits develop through interaction with others, be they members of a given social class, colleagues at work or school, or spambots littering our Twitter feeds. In a recent Medium post, Katie Rose Pipkin eloquently described how machines have already modified what we think of as natural language, whether we’re cognizant of it or not. We speak differently to search tools and virtual assistants because we have come to develop a tacit understanding of how they work and can modify our requests to communicate effectively.

NLG algorithms are generally considered to be a form of “artificial” or “machine intelligence” because they do things—like write news articles about sports or the weather, or write real estate ads, as does the prototype my Fast Forward Labs colleagues built—that we believe humans alone can do. (I hope to explore the implications of the historical, relativist concept of artificial intelligence, espoused by people like Nancy Fulda, in a separate post.) As illustrated in the WSJ article, most people then evaluate NLG performance the way André Bazin evaluates style in realism: as the art of realism lies in the seeming absence of artifice, so too does the art of algorithms lie in the seeming absence of automation. Commercialization only strengthens this push towards verisimilitude, as investment banks and news outlets like Forbes won’t pay top dollar for software that generates strange prose. In turn, we come to judge machine intelligence by its humanness, orienting development efforts towards writing prose that we would have written ourselves.

The latest developments in machine learning are enabling machines to develop models of us in turn, ever updating what information they present and how they present it to match the input we provide. Kazemi addresses this new give-and-take between man and machine head-on in his 2015 NaNoGenMo submission, “co-authoring” a novel with an algorithm: for every ten sentences the algorithm drafts, he commits only the one he, as the human, likes best. “Who wrote the book?” he asks. “[The algorithm] wrote literally every word, but [I] dictated nearly the entire form of the novel.” This is the same dynamic that new research tools built on IBM Watson present to lawyers and doctors: ROSS, a legal tool built on the Watson API, presents answers to research questions, and all the lawyer has to do is commit the answer she likes best. If NaNoGenMo helps us think more deeply about that dynamic, it can offer important insights into the future of AI.

Earlier this year, the Wall Street Journal (WSJ) published an article with an interactive sidebar featuring excerpts from financial investment research reports. Readers were prompted to identify whether the excerpts were written by robots or humans. Admittedly, Wall Street’s preference for terse prose over poetic flourish makes a challenge like this plausible. One sampled sentence, “Q2 cash balance expectation of $830m implies ~$80m of cash burn in Q2 after a $140m reduction in cash balance in Q1,” is effectively just three data points fused together with syntax. And that’s no coincidence. White-collar workers like journalists need not fear for their job security (at least not yet…) because new natural language generation (NLG) algorithms are very good at representing structured data sets in prose, but not yet very good at much else. That capability is in itself powerful, as our ability to draw insights from data often depends on how they are presented (e.g., a chart reveals insights one would have missed in rows and columns). But it is a far cry from the creative courage required to build a world on a blank page.

What if machines generated text with different stylistic goals? Or rather, what if we evaluated machine intelligence not by its humanness but by its alienness, by its ability to generate something beyond what we could have created—or would have thought to create—without the assistance of an algorithm? What if automated prose could rupture our automatized perceptions, as Shklovsky described poetry in Art as Device, and offer a new vehicle for our own creativity?

Technical constraints explain why NaNoGenMo has come to align itself with a poetics of recontextualization and reassembly. Genuine NLG algorithms, that is, those that can build words and syntax from the building blocks of letters and get smarter over time, are still nascent. Most of the 2014 submissions instead use rules to transform existing texts in creative ways, which also leads to topical similarities among submissions.

It is this search to use automation as a vehicle for defamiliarization that makes NaNoGenMo so exciting. Darius Kazemi, an internet artist who runs an annual Bot Summit, created NaNoGenMo “on a whim” in November 2013. Thoughtful about literary form, Kazemi was amused by the fact that National Novel Writing Month (NaNoWriMo) set only two criteria for participants: submissions must be written in 30 days (the month of November) and must comprise at least 50,000 words. The absence of formal constraints invited experimentation: why write a novel when you can write an algorithm that writes a novel? He tweeted his idea, and a new community formed on GitHub, a web-based software collaboration platform.

Moniker, a design studio based in Amsterdam, wrote a simple query that scans Twitter for sentences in the form “it’s + hour + : + minute + am/pm + and +” to compose a real-time global diary of daily activities. The “it’s [hour:minute] and I…” prompt tends to elicit predictable confessions or complaints, showing how stock expressions automate our thoughts: “It’s 12:20 and I need a drink;” “It’s 1:00 pm and I have not moved from my bed;” “It’s 11:00 pm and I’ve finally got a decent cup of coffee.” Twide and Twejudice replaces most of the dialogue in Austen’s original with words used in similar contexts on Twitter, resulting in frivolous exchanges: (Mr Bennet asking Mrs Bennet about Mr Bingley:) “Is he/she overrun 0r single?” (Mrs Bennet exclaiming about Mr Bingley’s arrival:) “What _a fineee thingi 4my rageaholics girls!” While these works lack the sophistication of The Seeker, by polluting Austen with Twitter diction they illustrate how contemporary media have modified communication norms.

Allison Parrish’s I Waded in Clear Water uses sentiment analysis algorithms, which rank sentences based on features that indicate emotional texture, to transform Gustavus Hindman Miller’s Ten Thousand Dreams Interpreted. Parrish mobilizes the formulaic “action” and “denotation” structure of Miller’s text (action = “To see an oak full of acorns”; denotation = “denotes increase and promotion”). She first transforms the actions into first-person, simple-past sentences (“I saw an oak full of acorns”) and then reorders the sentences from the worst to the best thing that can happen in dreams, according to a score given by a sentiment analysis algorithm run on the denotation. The sentiment scores create short chapters (“I drove into muddy water. I saw others weeding.”) and longer chapters with paratactic strings of disjointed actions: “…I descended a ladder. I saw any one lame. I saw my lover taking laudanum through disappointment. I heard mocking laughter. I kept a ledge. I had lice on my body. I saw. I lost it. I felt melancholy over any event. I saw others melancholy. I sent a message….” According to the sentiment algorithm, wading in clear water is our best dream.

Story 44

Evaluating these works by their capacity to read like human prose is a stale exercise because what qualifies as “natural” language is relative, not absolute. Our own linguistic habits develop through interaction with others, be they members of a given social class, colleagues at work or school, or spambots littering our Twitter feeds. In a recent Medium post, Katie Rose Pipkin eloquently described how machines have already modified what we think of as natural language, whether we’re cognizant of it or not. We speak differently to search tools and virtual assistants because we have come to develop a tacit understanding of how they work and can modify our requests to communicate effectively.

NLG algorithms are generally considered to be a form of “artificial” or “machine intelligence” because they do things—like write news articles about sports or the weather, or write real estate ads, as does the prototype my Fast Forward Labs colleagues built—that we believe humans alone can do. (I hope to explore the implications of the historical, relativist concept of artificial intelligence, espoused by people like Nancy Fulda, in a separate post.) As illustrated in the WSJ article, most people then evaluate NLG performance the way André Bazin evaluates style in realism: as the art of realism lies in the seeming absence of artifice, so too does the art of algorithms lie in the seeming absence of automation. Commercialization only strengthens this push towards verisimilitude, as investment banks and news outlets like Forbes won’t pay top dollar for software that generates strange prose. In turn, we come to judge machine intelligence by its humanness, orienting development efforts towards writing prose that we would have written ourselves.

The latest developments in machine learning are enabling machines to develop models of us in turn, ever updating what information they present and how they present it to match the input we provide. Kazemi addresses this new give-and-take between man and machine head-on in his 2015 NaNoGenMo submission, “co-authoring” a novel with an algorithm: for every ten sentences the algorithm drafts, he commits only the one he, as the human, likes best. “Who wrote the book?” he asks. “[The algorithm] wrote literally every word, but [I] dictated nearly the entire form of the novel.” This is the same dynamic that new research tools built on IBM Watson present to lawyers and doctors: ROSS, a legal tool built on the Watson API, presents answers to research questions, and all the lawyer has to do is commit the answer she likes best. If NaNoGenMo helps us think more deeply about that dynamic, it can offer important insights into the future of AI.

At least two 2014 submissions use dreams as a locus to explore the odd beauty of machine intelligence. Thricedotted’s The Seeker relates the autobiography of a machine trying to “learn about human behavior by reading WikiHow.” The work is visually beautiful, with each iteration of the algorithm’s operations punctuated by pages that raindrop abstractions and house aphorisms like “imagine not one thing could be undirected.” Like the hopscotch overtones in Cortázar’s Rayuela, the aphorisms encourage the reader to perceive meaningful patterns in what might otherwise be random data (Thricedotted’s internet identity often mentions apophenia). Time and again, the algorithm repeats a “work, scan, imagine” loop, scraping WikiHow, searching plain text memories for a concept encountered during “work,” and building a dream sequence—or “univision”—from concepts it doesn’t recognize. These univisions contain the most surprising poetry in the work, where beauty arises from the reader’s ineluctable tendency to feel meaning in fragments.

It is this search to use automation as a vehicle for defamiliarization that makes NaNoGenMo so exciting. Darius Kazemi, an internet artist who runs an annual Bot Summit, created NaNoGenMo “on a whim” in November 2013. Thoughtful about literary form, Kazemi was amused by the fact that National Novel Writing Month (NaNoWriMo) set only two criteria for participants: submissions must be written in 30 days (the month of November) and must comprise at least 50,000 words. The absence of formal constraints invited experimentation: why write a novel when you can write an algorithm that writes a novel? He tweeted his idea, and a new community formed on GitHub, a web-based software collaboration platform.

Moniker, a design studio based in Amsterdam, wrote a simple query that scans Twitter for sentences in the form “it’s + hour + : + minute + am/pm + and +” to compose a real-time global diary of daily activities. The “it’s [hour:minute] and I…” prompt tends to elicit predictable confessions or complaints, showing how stock expressions automate our thoughts: “It’s 12:20 and I need a drink;” “It’s 1:00 pm and I have not moved from my bed;” “It’s 11:00 pm and I’ve finally got a decent cup of coffee.” Twide and Twejudice replaces most of the dialogue in Austen’s original with words used in similar contexts on Twitter, resulting in frivolous exchanges: (Mr Bennet asking Mrs Bennet about Mr Bingley:) “Is he/she overrun 0r single?” (Mrs Bennet exclaiming about Mr Bingley’s arrival:) “What _a fineee thingi 4my rageaholics girls!” While these works lack the sophistication of The Seeker, by polluting Austen with Twitter diction they illustrate how contemporary media have modified communication norms.

Allison Parrish’s I Waded in Clear Water uses sentiment analysis algorithms, which rank sentences based on features that indicate emotional texture, to transform Gustavus Hindman Miller’s Ten Thousand Dreams Interpreted. Parrish mobilizes the formulaic “action” and “denotation” structure of Miller’s text (action = “To see an oak full of acorns”; denotation = “denotes increase and promotion”). She first transforms the actions into first-person, simple-past sentences (“I saw an oak full of acorns”) and then reorders the sentences from the worst to the best thing that can happen in dreams, according to a score given by a sentiment analysis algorithm run on the denotation. The sentiment scores create short chapters (“I drove into muddy water. I saw others weeding.”) and longer chapters with paratactic strings of disjointed actions: “…I descended a ladder. I saw any one lame. I saw my lover taking laudanum through disappointment. I heard mocking laughter. I kept a ledge. I had lice on my body. I saw. I lost it. I felt melancholy over any event. I saw others melancholy. I sent a message….” According to the sentiment algorithm, wading in clear water is our best dream.

What if machines generated text with different stylistic goals? Or rather, what if we evaluated machine intelligence not by its humanness but by its alienness, by its ability to generate something beyond what we could have created—or would have thought to create—without the assistance of an algorithm? What if automated prose could rupture our automatized perceptions, as Shklovsky described poetry in Art as Device, and offer a new vehicle for our own creativity?

Earlier this year, the Wall Street Journal (WSJ) published an article with an interactive sidebar featuring excerpts from financial investment research reports. Readers were prompted to identify whether the excerpts were written by robots or humans. Admittedly, Wall Street’s preference for terse prose over poetic flourish makes a challenge like this plausible. One sampled sentence, “Q2 cash balance expectation of $830m implies ~$80m of cash burn in Q2 after a $140m reduction in cash balance in Q1,” is effectively just three data points fused together with syntax. And that’s no coincidence. White-collar workers like journalists need not fear for their job security (at least not yet…) because new natural language generation (NLG) algorithms are very good at representing structured data sets in prose, but not yet very good at much else. That capability is in itself powerful, as our ability to draw insights from data often depends on how they are presented (e.g., a chart reveals insights one would have missed in rows and columns). But it is a far cry from the creative courage required to build a world on a blank page.

Technical constraints explain why NaNoGenMo has come to align itself with a poetics of recontextualization and reassembly. Genuine NLG algorithms, that is, those that can build words and syntax from the building blocks of letters and get smarter over time, are still nascent. Most of the 2014 submissions instead use rules to transform existing texts in creative ways, which also leads to topical similarities among submissions.

While open to anyone and, as in NaNoWriMo, governed by the single constraint that submissions contain at least 50,000 words, NaNoGenMo is gradually defining itself as a cohesive artistic movement that uses algorithms to experiment with literary form. The group’s identity is partly generated by ressentiment towards negative criticism that its “disjointed, robotic scripts” are “unlikely to trouble Booker judges.” Last year, one participant mocked how “futile it is to try to explain what we’re actually doing here, to the normals.” More positively, members are shaping that identity through shared formal and critical resources. John Ohno (alias enkiv2) posted code to generate sestinas, haikus, and synonyms. Allison Parrish (alias aparrish) shared an interface to the Carnegie Mellon Pronouncing Dictionary that enables users to do things like scrape the dictionary for rhymes for a given word. Finally, Isaac Karth (alias ikarth) explained to members how the group’s tendency to assemble new poetry from prior texts has intellectual roots in Dadaism, Burroughs’s cut-up techniques, and the constraint-oriented works of Oulipo. When I spoke with Kazemi about the project, he said that Ken Goldsmith’s Uncreative Writing had inspired his thinking on how NaNoGenMo can challenge customary notions of authorship and creativity.

Story 45

Technical constraints explain why NaNoGenMo has come to align itself with a poetics of recontextualization and reassembly. Genuine NLG algorithms, that is, those that can build words and syntax from the building blocks of letters and get smarter over time, are still nascent. Most of the 2014 submissions instead use rules to transform existing texts in creative ways, which also leads to topical similarities among submissions.

Allison Parrish’s I Waded in Clear Water uses sentiment analysis algorithms, which rank sentences based on features that indicate emotional texture, to transform Gustavus Hindman Miller’s Ten Thousand Dreams Interpreted. Parrish mobilizes the formulaic “action” and “denotation” structure of Miller’s text (action = “To see an oak full of acorns”; denotation = “denotes increase and promotion”). She first transforms the actions into first-person, simple-past sentences (“I saw an oak full of acorns”) and then reorders the sentences from the worst to the best thing that can happen in dreams, according to a score given by a sentiment analysis algorithm run on the denotation. The sentiment scores create short chapters (“I drove into muddy water. I saw others weeding.”) and longer chapters with paratactic strings of disjointed actions: “…I descended a ladder. I saw any one lame. I saw my lover taking laudanum through disappointment. I heard mocking laughter. I kept a ledge. I had lice on my body. I saw. I lost it. I felt melancholy over any event. I saw others melancholy. I sent a message….” According to the sentiment algorithm, wading in clear water is our best dream.

It is this search to use automation as a vehicle for defamiliarization that makes NaNoGenMo so exciting. Darius Kazemi, an internet artist who runs an annual Bot Summit, created NaNoGenMo “on a whim” in November 2013. Thoughtful about literary form, Kazemi was amused by the fact that National Novel Writing Month (NaNoWriMo) set only two criteria for participants: submissions must be written in 30 days (the month of November) and must comprise at least 50,000 words. The absence of formal constraints invited experimentation: why write a novel when you can write an algorithm that writes a novel? He tweeted his idea, and a new community formed on GitHub, a web-based software collaboration platform.

At least two 2014 submissions use dreams as a locus to explore the odd beauty of machine intelligence. Thricedotted’s The Seeker relates the autobiography of a machine trying to “learn about human behavior by reading WikiHow.” The work is visually beautiful, with each iteration of the algorithm’s operations punctuated by pages that raindrop abstractions and house aphorisms like “imagine not one thing could be undirected.” Like the hopscotch overtones in Cortázar’s Rayuela, the aphorisms encourage the reader to perceive meaningful patterns in what might otherwise be random data (Thricedotted’s internet identity often mentions apophenia). Time and again, the algorithm repeats a “work, scan, imagine” loop, scraping WikiHow, searching plain text memories for a concept encountered during “work,” and building a dream sequence—or “univision”—from concepts it doesn’t recognize. These univisions contain the most surprising poetry in the work, where beauty arises from the reader’s ineluctable tendency to feel meaning in fragments.

While open to anyone and, as in NaNoWriMo, governed by the single constraint that submissions contain at least 50,000 words, NaNoGenMo is gradually defining itself as a cohesive artistic movement that uses algorithms to experiment with literary form. The group’s identity is partly generated by ressentiment towards negative criticism that its “disjointed, robotic scripts” are “unlikely to trouble Booker judges.” Last year, one participant mocked how “futile it is to try to explain what we’re actually doing here, to the normals.” More positively, members are shaping that identity through shared formal and critical resources. John Ohno (alias enkiv2) posted code to generate sestinas, haikus, and synonyms. Allison Parrish (alias aparrish) shared an interface to the Carnegie Mellon Pronouncing Dictionary that enables users to do things like scrape the dictionary for rhymes for a given word. Finally, Isaac Karth (alias ikarth) explained to members how the group’s tendency to assemble new poetry from prior texts has intellectual roots in Dadaism, Burroughs’s cut-up techniques, and the constraint-oriented works of Oulipo. When I spoke with Kazemi about the project, he said that Ken Goldsmith’s Uncreative Writing had inspired his thinking on how NaNoGenMo can challenge customary notions of authorship and creativity.

What if machines generated text with different stylistic goals? Or rather, what if we evaluated machine intelligence not by its humanness but by its alienness, by its ability to generate something beyond what we could have created—or would have thought to create—without the assistance of an algorithm? What if automated prose could rupture our automatized perceptions, as Shklovsky described poetry in Art as Device, and offer a new vehicle for our own creativity?

Earlier this year, the Wall Street Journal (WSJ) published an article with an interactive sidebar featuring excerpts from financial investment research reports. Readers were prompted to identify whether the excerpts were written by robots or humans. Admittedly, Wall Street’s preference for terse prose over poetic flourish makes a challenge like this plausible. One sampled sentence, “Q2 cash balance expectation of $830m implies ~$80m of cash burn in Q2 after a $140m reduction in cash balance in Q1,” is effectively just three data points fused together with syntax. And that’s no coincidence. White-collar workers like journalists need not fear for their job security (at least not yet…) because new natural language generation (NLG) algorithms are very good at representing structured data sets in prose, but not yet very good at much else. That capability is in itself powerful, as our ability to draw insights from data often depends on how they are presented (e.g., a chart reveals insights one would have missed in rows and columns). But it is a far cry from the creative courage required to build a world on a blank page.

The latest developments in machine learning are enabling machines to develop models of us in turn, ever updating what information they present and how they present it to match the input we provide. Kazemi addresses this new give-and-take between man and machine head-on in his 2015 NaNoGenMo submission, “co-authoring” a novel with an algorithm: for every ten sentences the algorithm drafts, he commits only the one he, as the human, likes best. “Who wrote the book?” he asks. “[The algorithm] wrote literally every word, but [I] dictated nearly the entire form of the novel.” This is the same dynamic that new research tools built on IBM Watson present to lawyers and doctors: ROSS, a legal tool built on the Watson API, presents answers to research questions, and all the lawyer has to do is commit the answer she likes best. If NaNoGenMo helps us think more deeply about that dynamic, it can offer important insights into the future of AI.

NLG algorithms are generally considered to be a form of “artificial” or “machine intelligence” because they do things—like write news articles about sports or the weather, or write real estate ads, as does the prototype my Fast Forward Labs colleagues built—that we believe humans alone can do. (I hope to explore the implications of the historical, relativist concept of artificial intelligence, espoused by people like Nancy Fulda, in a separate post.) As illustrated in the WSJ article, most people then evaluate NLG performance the way André Bazin evaluates style in realism: as the art of realism lies in the seeming absence of artifice, so too does the art of algorithms lie in the seeming absence of automation. Commercialization only strengthens this push towards verisimilitude, as investment banks and news outlets like Forbes won’t pay top dollar for software that generates strange prose. In turn, we come to judge machine intelligence by its humanness, orienting development efforts towards writing prose that we would have written ourselves.

Moniker, a design studio based in Amsterdam, wrote a simple query that scans Twitter for sentences of the form “it’s + hour + : + minute + am/pm + and +” to compose a real-time global diary of daily activities. The “it’s [hour] and I am” formula tends to elicit predictable confessions or complaints, showing how stock expressions automate our thoughts: “It’s 12:20 and I need a drink;” “It’s 1:00 pm and I have not moved from my bed;” “It’s 11:00 pm and I’ve finally got a decent cup of coffee.” Another submission, Twide and Twejudice, replaces most of the words in the dialogue of Austen’s Pride and Prejudice with words used in similar contexts on Twitter, resulting in frivolous exchanges: (Mr Bennet asking Mrs Bennet about Mr Bingley:) “Is he/she overrun 0r single?” (Mrs Bennet exclaiming about Mr Bingley’s arrival:) “What _a fineee thingi 4my rageaholics girls!” While these lack the sophistication of The Seeker, by polluting Austen with Twitter diction they illustrate how contemporary media have modified communication norms.
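
Moniker’s actual query isn’t reproduced here, but a regex along these lines would capture the pattern; the expression below is my own reconstruction, not the studio’s code.

```python
import re

# Match tweets of the form "it's <hour>:<minute> [am/pm] and <activity>".
TIME_DIARY = re.compile(
    r"\bit[’']s\s+(\d{1,2}):(\d{2})\s*([ap]m)?\s+and\s+(.+)",
    re.IGNORECASE,
)

def diary_entry(tweet):
    """Return (hour, minute, am/pm, activity) if the tweet fits the form."""
    m = TIME_DIARY.search(tweet)
    return m.groups() if m else None

print(diary_entry("It’s 1:00 pm and I have not moved from my bed"))
# -> ('1', '00', 'pm', 'I have not moved from my bed')
```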

Evaluating these works by their capacity to read like human prose is a stale exercise because what qualifies as “natural” language is relative, not absolute. Our own linguistic habits are developed through interaction with others, be they members of a given social class, colleagues at work or school, or spambots littering our Twitter feeds. In a recent Medium post, Katie Rose Pipkin eloquently described how machines have already modified what we think of as natural language, whether we’re cognizant of it or not. We speak differently to search tools and virtual assistants because we have come to develop a tacit understanding of how they work and can modify our requests to communicate effectively.

Story 46

At least two 2014 submissions use dreams as a locus to explore the odd beauty of machine intelligence. Thricedotted’s The Seeker relates the autobiography of a machine trying to “learn about human behavior by reading WikiHow.” The work is visually beautiful, with each iteration of the algorithm’s operations punctuated by pages that raindrop abstractions and house aphorisms like “imagine not one thing could be undirected.” Like the hopscotch overtones in Cortázar’s Rayuela, the aphorisms encourage the reader to perceive meaningful patterns in what might otherwise be random data (Thricedotted’s internet identity often mentions apophenia). Time and again, the algorithm repeats a “work, scan, imagine” loop, scraping WikiHow, searching plain text memories for a concept encountered during “work,” and building a dream sequence—or “univision”—from concepts it doesn’t recognize. These univisions contain the most surprising poetry in the work, where beauty arises from the reader’s ineluctable tendency to feel meaning in fragments.
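
The “work, scan, imagine” loop lends itself to a toy sketch; everything below (the lexicon, the word-list “memory”) is a stand-in of mine, not thricedotted’s actual implementation.

```python
memory = []                                       # plain-text memories
lexicon = {"the", "a", "to", "and", "of", "in"}   # concepts already known

def work(article):
    """'Work': ingest a scraped WikiHow article into memory."""
    memory.extend(article.lower().split())

def scan():
    """'Scan': look back through memory for recognized concepts."""
    return [w for w in memory if w in lexicon]

def imagine():
    """'Imagine': assemble a dream 'univision' from unrecognized concepts."""
    return " ".join(w for w in memory if w not in lexicon)

work("fold the paper along the crease and breathe")
print(imagine())   # -> "fold paper along crease breathe"
```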

While open to anyone and, as in NaNoWriMo, governed by the single constraint that submissions contain at least 50,000 words, NaNoGenMo is gradually defining itself as a cohesive artistic movement that uses algorithms to experiment with literary form. The group’s identity is partly generated by ressentiment towards negative criticism that their “disjointed, robotic scripts” are “unlikely to trouble Booker judges.” Last year, one participant mocked how “futile it is to try to explain what we’re actually doing here, to the normals.” More positively, they are shaping identity through shared formal and critical resources. John Ohno (alias enkiv2) posted code to generate sestinas, haikus, and synonyms. Allison Parrish (alias aparrish) shared an interface to the Carnegie Mellon Pronouncing Dictionary that enables users to do things like scrape the dictionary for rhymes for a given word. Finally, Isaac Karth (alias ikarth) explained to members how the group’s tendency to assemble new poetry from prior texts has intellectual roots in Dadaism, Burroughs’s cut-up techniques, and the constraint-oriented works of Oulipo. When I spoke with Kazemi about the project, he said that Ken Goldsmith’s Uncreative Writing had inspired his thinking on how NaNoGenMo can challenge customary notions of authorship and creativity.
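
Parrish’s interface is, I believe, the `pronouncing` Python package (`pip install pronouncing`); assuming that is the tool meant, scraping rhymes is a one-liner:

```python
import pronouncing

# Rhymes for a given word, straight from the CMU Pronouncing Dictionary.
print(pronouncing.rhymes("climbing")[:5])

# The dictionary also exposes ARPAbet phones, useful for meter and stress.
print(pronouncing.phones_for_word("permit"))
```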

Technical constraints explain why NaNoGenMo has come to align itself with poetics of recontextualization and reassembly. Genuine NLG algorithms, that is, those that can build words and syntax from the building blocks of letters and get smarter over time, are still nascent. Most of the 2014 submissions instead use rules to transform existing texts in creative ways, which also leads to topical similarities.

Earlier this year, the Wall Street Journal (WSJ) published an article with an interactive sidebar featuring excerpts from financial investment research reports. Readers were prompted to identify whether the excerpts were written by robots or humans. Admittedly, Wall Street’s preference for terse prose over poetic flourish makes such a challenge plausible. One sampled sentence, “Q2 cash balance expectation of $830m implies ~$80m of cash burn in Q2 after a $140m reduction in cash balance in Q1,” is effectively just three data points fused together with syntax. And that’s no coincidence. White-collar workers like journalists need not fear for their job security (at least not yet…) because new natural language generation (NLG) algorithms are very good at representing structured data sets in prose, but not yet very good at much else. That capability is powerful in itself, as our ability to draw insights from data often depends on how they are presented (e.g., a chart reveals patterns one would have missed in rows and columns). But it is a far cry from the creative courage required to build a world on a blank page.

What if machines generated text with different stylistic goals? Or rather, what if we evaluated machine intelligence not by its humanness but by its alienness, by its ability to generate something beyond what we could have created—or would have thought to create—without the assistance of an algorithm? What if automated prose could rupture our automatized perceptions, as Shklovsky described poetry in Art as Device, and offer a new vehicle for our own creativity?

Allison Parrish’s I Waded in Clear Water uses sentiment analysis algorithms, which rank sentences based on features that indicate emotional texture, to transform Gustavus Hindman Miller’s Ten Thousand Dreams Interpreted. Parrish mobilizes the formulaic “action” and “denotation” structure of Miller’s text (action = “To see an oak full of acorns”; denotation = “denotes increase and promotion”). She first transforms the actions into first-person, simple-past sentences (“I saw an oak full of acorns”) and then reorders the sentences from the worst to the best thing that can happen in dreams, according to a score given by a sentiment analysis algorithm run on the denotation. The sentiment scores create short chapters (“I drove into muddy water. I saw others weeding.”) as well as longer chapters with paratactic strings of disjointed actions: “…I descended a ladder. I saw any one lame. I saw my lover taking laudanum through disappointment. I heard mocking laughter. I kept a ledge. I had lice on my body. I saw. I lost it. I felt melancholy over any event. I saw others melancholy. I sent a message….” According to the sentiment algorithm, wading in clear water is our best dream.
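
A toy reconstruction of that pipeline, with a word-list sentiment scorer and a tiny past-tense table standing in for Parrish’s actual components:

```python
import re

# Action/denotation pairs in Miller's formulaic structure.
entries = [
    ("To see an oak full of acorns", "denotes increase and promotion"),
    ("To dream of wading in clear water", "denotes joy"),
    ("To wade in muddy water", "denotes grief and loss"),
]

GOOD, BAD = {"increase", "promotion", "joy"}, {"grief", "loss"}
PAST = {"see": "saw", "dream": "dreamt", "wade": "waded"}

def score(denotation):
    """Toy sentiment: good words minus bad words in the denotation."""
    words = set(re.findall(r"[a-z]+", denotation.lower()))
    return len(words & GOOD) - len(words & BAD)

def first_person_past(action):
    """Crudely turn 'To see X' into 'I saw X'."""
    _, verb, *rest = action.split()          # drop the leading "To"
    return " ".join(["I", PAST.get(verb, verb + "d"), *rest])

# Order dreams from worst to best by the sentiment of their denotation.
for action, denotation in sorted(entries, key=lambda e: score(e[1])):
    print(first_person_past(action) + ".")
```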

Moniker, a design studio based in Amsterdam, wrote a simple query that scans Twitter for sentences of the form “it’s + hour + : + minute + am/pm + and +” to compose a real-time global diary of daily activities. The “it’s [hour] and I am” formula tends to elicit predictable confessions or complaints, showing how stock expressions automate our thoughts: “It’s 12:20 and I need a drink;” “It’s 1:00 pm and I have not moved from my bed;” “It’s 11:00 pm and I’ve finally got a decent cup of coffee.” Another submission, Twide and Twejudice, replaces most of the words in the dialogue of Austen’s Pride and Prejudice with words used in similar contexts on Twitter, resulting in frivolous exchanges: (Mr Bennet asking Mrs Bennet about Mr Bingley:) “Is he/she overrun 0r single?” (Mrs Bennet exclaiming about Mr Bingley’s arrival:) “What _a fineee thingi 4my rageaholics girls!” While these lack the sophistication of The Seeker, by polluting Austen with Twitter diction they illustrate how contemporary media have modified communication norms.

NLG algorithms are generally considered to be a form of “artificial” or “machine intelligence” because they do things—like write news articles about sports or the weather, or write real estate ads, as the prototype my Fast Forward Labs colleagues built—we believe humans alone can do. (I hope to explore the implications of the historical, relativist concept of artificial intelligence, espoused by people like Nancy Fulda, in a separate post.) As illustrated in the WSJ article, most people then evaluate NLG performance the way André Bazin evaluated style in realist cinema: as the art of realism lies in the seeming absence of artifice, so too does the art of algorithms lie in the seeming absence of automation. Commercialization only enhances this push towards verisimilitude, as investment banks and news outlets like Forbes won’t pay top dollar for software that generates strange prose. In turn, we come to judge machine intelligence by its humanness, orienting development efforts towards writing prose that we would have written ourselves.

It is this search to use automation as a vehicle for defamiliarization that makes NaNoGenMo so exciting. Darius Kazemi, an internet artist who runs an annual Bot Summit, created NaNoGenMo “on a whim” in November 2013. Thoughtful about literary form, Kazemi was amused by the fact that National Novel Writing Month (NaNoWriMo) set only two criteria for participants: submissions must be written in 30 days (the month of November) and must comprise at least 50,000 words. The absence of formal constraints invited experimentation: why write a novel when you can write an algorithm that writes a novel? He tweeted his idea, and a new community formed on GitHub (a web-based software development collaboration tool).

The latest developments in machine learning are enabling machines to develop models of us in turn, ever updating what information they present, and how they present it, to match the input we provide. Kazemi addresses this new give-and-take between man and machine head-on in his 2015 NaNoGenMo submission, “co-authoring” a novel with an algorithm: for every ten sentences the algorithm drafts, he commits only the one he, as human, likes best. “Who wrote the book?” he asks. “[The algorithm] wrote literally every word, but [I] dictated nearly the entire form of the novel.” This is the same kind of dynamic that new research tools built on IBM Watson present to lawyers and doctors: ROSS, a legal tool built on the Watson API, proposes answers to research questions, and all the lawyer has to do is commit the answer she likes best. If NaNoGenMo helps us think more deeply about that dynamic, it can offer important insights into the future of AI.

Evaluating these works by their capacity to read like human prose is a stale exercise because what qualifies as “natural” language is relative, not absolute. Our own linguistic habits are developed through interaction with others, be they members of a given social class, colleagues at work or school, or spambots littering our Twitter feeds. In a recent Medium post, Katie Rose Pipkin eloquently described how machines have already modified what we think of as natural language, whether we’re cognizant of it or not. We speak differently to search tools and virtual assistants because we have come to develop a tacit understanding of how they work and can modify our requests to communicate effectively.

Story 47

Moniker, a design studio based in Amsterdam, wrote a simple query that scans Twitter for sentences of the form “it’s + hour + : + minute + am/pm + and +” to compose a real-time global diary of daily activities. The “it’s [hour] and I am” formula tends to elicit predictable confessions or complaints, showing how stock expressions automate our thoughts: “It’s 12:20 and I need a drink;” “It’s 1:00 pm and I have not moved from my bed;” “It’s 11:00 pm and I’ve finally got a decent cup of coffee.” Another submission, Twide and Twejudice, replaces most of the words in the dialogue of Austen’s Pride and Prejudice with words used in similar contexts on Twitter, resulting in frivolous exchanges: (Mr Bennet asking Mrs Bennet about Mr Bingley:) “Is he/she overrun 0r single?” (Mrs Bennet exclaiming about Mr Bingley’s arrival:) “What _a fineee thingi 4my rageaholics girls!” While these lack the sophistication of The Seeker, by polluting Austen with Twitter diction they illustrate how contemporary media have modified communication norms.

It is this search to use automation as a vehicle for defamiliarization that makes NaNoGenMo so exciting. Darius Kazemi, an internet artist who runs an annual Bot Summit, created NaNoGenMo “on a whim” in November 2013. Thoughtful about literary form, Kazemi was amused by the fact that National Novel Writing Month (NaNoWriMo) set only two criteria for participants: submissions must be written in 30 days (the month of November) and must comprise at least 50,000 words. The absence of formal constraints invited experimentation: why write a novel when you can write an algorithm that writes a novel? He tweeted his idea, and a new community formed on GitHub (a web-based software development collaboration tool).

What if machines generated text with different stylistic goals? Or rather, what if we evaluated machine intelligence not by its humanness but by its alienness, by its ability to generate something beyond what we could have created—or would have thought to create—without the assistance of an algorithm? What if automated prose could rupture our automatized perceptions, as Shklovsky described poetry in Art as Device, and offer a new vehicle for our own creativity?

Allison Parrish’s I Waded in Clear Water uses sentiment analysis algorithms, which rank sentences based on features that indicate emotional texture, to transform Gustavus Hindman Miller’s Ten Thousand Dreams Interpreted. Parrish mobilizes the formulaic “action” and “denotation” structure of Miller’s text (action = “To see an oak full of acorns”; denotation = “denotes increase and promotion”). She first transforms the actions into first-person, simple-past sentences (“I saw an oak full of acorns”) and then reorders the sentences from the worst to the best thing that can happen in dreams, according to a score given by a sentiment analysis algorithm run on the denotation. The sentiment scores create short chapters (“I drove into muddy water. I saw others weeding.”) as well as longer chapters with paratactic strings of disjointed actions: “…I descended a ladder. I saw any one lame. I saw my lover taking laudanum through disappointment. I heard mocking laughter. I kept a ledge. I had lice on my body. I saw. I lost it. I felt melancholy over any event. I saw others melancholy. I sent a message….” According to the sentiment algorithm, wading in clear water is our best dream.

Earlier this year, the Wall Street Journal (WSJ) published an article with an interactive sidebar featuring excerpts from financial investment research reports. Readers were prompted to identify whether the excerpts were written by robots or humans. Admittedly, Wall Street’s preference for terse prose over poetic flourish makes such a challenge plausible. One sampled sentence, “Q2 cash balance expectation of $830m implies ~$80m of cash burn in Q2 after a $140m reduction in cash balance in Q1,” is effectively just three data points fused together with syntax. And that’s no coincidence. White-collar workers like journalists need not fear for their job security (at least not yet…) because new natural language generation (NLG) algorithms are very good at representing structured data sets in prose, but not yet very good at much else. That capability is powerful in itself, as our ability to draw insights from data often depends on how they are presented (e.g., a chart reveals patterns one would have missed in rows and columns). But it is a far cry from the creative courage required to build a world on a blank page.

Evaluating these works by their capacity to read like human prose is a stale exercise because what qualifies as “natural” language is relative, not absolute. Our own linguistic habits are developed through interaction with others, be they members of a given social class, colleagues at work or school, or spambots littering our Twitter feeds. In a recent Medium post, Katie Rose Pipkin eloquently described how machines have already modified what we think of as natural language, whether we’re cognizant of it or not. We speak differently to search tools and virtual assistants because we have come to develop a tacit understanding of how they work and can modify our requests to communicate effectively.

The latest developments in machine learning are enabling machines to develop models of us in turn, ever updating what information they present, and how they present it, to match the input we provide. Kazemi addresses this new give-and-take between man and machine head-on in his 2015 NaNoGenMo submission, “co-authoring” a novel with an algorithm: for every ten sentences the algorithm drafts, he commits only the one he, as human, likes best. “Who wrote the book?” he asks. “[The algorithm] wrote literally every word, but [I] dictated nearly the entire form of the novel.” This is the same kind of dynamic that new research tools built on IBM Watson present to lawyers and doctors: ROSS, a legal tool built on the Watson API, proposes answers to research questions, and all the lawyer has to do is commit the answer she likes best. If NaNoGenMo helps us think more deeply about that dynamic, it can offer important insights into the future of AI.

Technical constraints explain why NaNoGenMo has come to align itself with poetics of recontextualization and reassembly. Genuine NLG algorithms, that is, those that can build words and syntax from the building blocks of letters and get smarter over time, are still nascent. Most of the 2014 submissions instead use rules to transform existing texts in creative ways, which also leads to topical similarities.

While open to anyone and, as in NaNoWriMo, governed by the single constraint that submissions contain at least 50,000 words, NaNoGenMo is gradually defining itself as a cohesive artistic movement that uses algorithms to experiment with literary form. The group’s identity is partly generated by ressentiment towards negative criticism that their “disjointed, robotic scripts” are “unlikely to trouble Booker judges.” Last year, one participant mocked how “futile it is to try to explain what we’re actually doing here, to the normals.” More positively, they are shaping identity through shared formal and critical resources. John Ohno (alias enkiv2) posted code to generate sestinas, haikus, and synonyms. Allison Parrish (alias aparrish) shared an interface to the Carnegie Mellon Pronouncing Dictionary that enables users to do things like scrape the dictionary for rhymes for a given word. Finally, Isaac Karth (alias ikarth) explained to members how the group’s tendency to assemble new poetry from prior texts has intellectual roots in Dadaism, Burroughs’s cut-up techniques, and the constraint-oriented works of Oulipo. When I spoke with Kazemi about the project, he said that Ken Goldsmith’s Uncreative Writing had inspired his thinking on how NaNoGenMo can challenge customary notions of authorship and creativity.

At least two 2014 submissions use dreams as a locus to explore the odd beauty of machine intelligence. Thricedotted’s The Seeker relates the autobiography of a machine trying to “learn about human behavior by reading WikiHow.” The work is visually beautiful, with each iteration of the algorithm’s operations punctuated by pages that raindrop abstractions and house aphorisms like “imagine not one thing could be undirected.” Like the hopscotch overtones in Cortázar’s Rayuela, the aphorisms encourage the reader to perceive meaningful patterns in what might otherwise be random data (Thricedotted’s internet identity often mentions apophenia). Time and again, the algorithm repeats a “work, scan, imagine” loop, scraping WikiHow, searching plain text memories for a concept encountered during “work,” and building a dream sequence—or “univision”—from concepts it doesn’t recognize. These univisions contain the most surprising poetry in the work, where beauty arises from the reader’s ineluctable tendency to feel meaning in fragments.

NLG algorithms are generally considered to be a form of “artificial” or “machine intelligence” because they do things—like write news articles about sports or the weather, or write real estate ads, as the prototype my Fast Forward Labs colleagues built—we believe humans alone can do. (I hope to explore the implications of the historical, relativist concept of artificial intelligence, espoused by people like Nancy Fulda, in a separate post.) As illustrated in the WSJ article, most people then evaluate NLG performance the way André Bazin evaluated style in realist cinema: as the art of realism lies in the seeming absence of artifice, so too does the art of algorithms lie in the seeming absence of automation. Commercialization only enhances this push towards verisimilitude, as investment banks and news outlets like Forbes won’t pay top dollar for software that generates strange prose. In turn, we come to judge machine intelligence by its humanness, orienting development efforts towards writing prose that we would have written ourselves.

Story 48

Evaluating these works by their capacity to read like human prose is a stale exercise because what qualifies as “natural” language is relative, not absolute. Our own linguistic habits are developed through interaction with others, be they members of a given social class, colleagues at work or school, or spambots littering our Twitter feeds. In a recent Medium post, Katie Rose Pipkin eloquently described how machines have already modified what we think of as natural language, whether we’re cognizant of it or not. We speak differently to search tools and virtual assistants because we have come to develop a tacit understanding of how they work and can modify our requests to communicate effectively.

Technical constraints explain why NaNoGenMo has come to align itself with poetics of recontextualization and reassembly. Genuine NLG algorithms, that is, those that can build words and syntax from the building blocks of letters and get smarter over time, are still nascent. Most of the 2014 submissions instead use rules to transform existing texts in creative ways, which also leads to topical similarities.

The latest developments in machine learning are enabling machines to develop models of us in turn, ever updating what information they present, and how they present it, to match the input we provide. Kazemi addresses this new give-and-take between man and machine head-on in his 2015 NaNoGenMo submission, “co-authoring” a novel with an algorithm: for every ten sentences the algorithm drafts, he commits only the one he, as human, likes best. “Who wrote the book?” he asks. “[The algorithm] wrote literally every word, but [I] dictated nearly the entire form of the novel.” This is the same kind of dynamic that new research tools built on IBM Watson present to lawyers and doctors: ROSS, a legal tool built on the Watson API, proposes answers to research questions, and all the lawyer has to do is commit the answer she likes best. If NaNoGenMo helps us think more deeply about that dynamic, it can offer important insights into the future of AI.

What if machines generated text with different stylistic goals? Or rather, what if we evaluated machine intelligence not by its humanness but by its alienness, by its ability to generate something beyond what we could have created—or would have thought to create—without the assistance of an algorithm? What if automated prose could rupture our automatized perceptions, as Shklovsky described poetry in Art as Device, and offer a new vehicle for our own creativity?

It is this search to use automation as a vehicle for defamiliarization that makes NaNoGenMo so exciting. Darius Kazemi, an internet artist who runs an annual Bot Summit, created NaNoGenMo “on a whim” in November 2013. Thoughtful about literary form, Kazemi was amused by the fact that National Novel Writing Month (NaNoWriMo) set only two criteria for participants: submissions must be written in 30 days (the month of November) and must comprise at least 50,000 words. The absence of formal constraints invited experimentation: why write a novel when you can write an algorithm that writes a novel? He tweeted his idea, and a new community formed on GitHub (a web-based software development collaboration tool).

Moniker, a design studio based in Amsterdam, wrote a simple query that scans Twitter for sentences of the form “it’s + hour + : + minute + am/pm + and +” to compose a real-time global diary of daily activities. The “it’s [hour] and I am” formula tends to elicit predictable confessions or complaints, showing how stock expressions automate our thoughts: “It’s 12:20 and I need a drink;” “It’s 1:00 pm and I have not moved from my bed;” “It’s 11:00 pm and I’ve finally got a decent cup of coffee.” Another submission, Twide and Twejudice, replaces most of the words in the dialogue of Austen’s Pride and Prejudice with words used in similar contexts on Twitter, resulting in frivolous exchanges: (Mr Bennet asking Mrs Bennet about Mr Bingley:) “Is he/she overrun 0r single?” (Mrs Bennet exclaiming about Mr Bingley’s arrival:) “What _a fineee thingi 4my rageaholics girls!” While these lack the sophistication of The Seeker, by polluting Austen with Twitter diction they illustrate how contemporary media have modified communication norms.

At least two 2014 submissions use dreams as a locus to explore the odd beauty of machine intelligence. Thricedotted’s The Seeker relates the autobiography of a machine trying to “learn about human behavior by reading WikiHow.” The work is visually beautiful, with each iteration of the algorithm’s operations punctuated by pages that raindrop abstractions and house aphorisms like “imagine not one thing could be undirected.” Like the hopscotch overtones in Cortázar’s Rayuela, the aphorisms encourage the reader to perceive meaningful patterns in what might otherwise be random data (Thricedotted’s internet identity often mentions apophenia). Time and again, the algorithm repeats a “work, scan, imagine” loop, scraping WikiHow, searching plain text memories for a concept encountered during “work,” and building a dream sequence—or “univision”—from concepts it doesn’t recognize. These univisions contain the most surprising poetry in the work, where beauty arises from the reader’s ineluctable tendency to feel meaning in fragments.

While open to anyone and, as in NaNoWriMo, governed by the single constraint that submissions contain at least 50,000 words, NaNoGenMo is gradually defining itself as a cohesive artistic movement that uses algorithms to experiment with literary form. The group’s identity is partly generated by ressentiment towards negative criticism that their “disjointed, robotic scripts” are “unlikely to trouble Booker judges.” Last year, one participant mocked how “futile it is to try to explain what we’re actually doing here, to the normals.” More positively, they are shaping identity through shared formal and critical resources. John Ohno (alias enkiv2) posted code to generate sestinas, haikus, and synonyms. Allison Parrish (alias aparrish) shared an interface to the Carnegie Mellon Pronouncing Dictionary that enables users to do things like scrape the dictionary for rhymes for a given word. Finally, Isaac Karth (alias ikarth) explained to members how the group’s tendency to assemble new poetry from prior texts has intellectual roots in Dadaism, Burroughs’s cut-up techniques, and the constraint-oriented works of Oulipo. When I spoke with Kazemi about the project, he said that Ken Goldsmith’s Uncreative Writing had inspired his thinking on how NaNoGenMo can challenge customary notions of authorship and creativity.

Allison Parrish’s I Waded in Clear Water uses sentiment analysis algorithms, which rank sentences based on features that indicate emotional texture, to transform Gustavus Hindman Miller’s Ten Thousand Dreams Interpreted. Parrish mobilizes the formulaic “action” and “denotation” structure of Miller’s text (action = “To see an oak full of acorns”; denotation = “denotes increase and promotion”). She first transforms the actions into first-person, simple-past sentences (“I saw an oak full of acorns”) and then reorders the sentences from the worst to the best thing that can happen in dreams, according to a score given by a sentiment analysis algorithm run on the denotation. The sentiment scores create short chapters (“I drove into muddy water. I saw others weeding.”) as well as longer chapters with paratactic strings of disjointed actions: “…I descended a ladder. I saw any one lame. I saw my lover taking laudanum through disappointment. I heard mocking laughter. I kept a ledge. I had lice on my body. I saw. I lost it. I felt melancholy over any event. I saw others melancholy. I sent a message….” According to the sentiment algorithm, wading in clear water is our best dream.

NLG algorithms are generally considered to be a form of “artificial” or “machine intelligence” because they do things—like write news articles about sports or the weather, or write real estate ads, as the prototype my Fast Forward Labs colleagues built—we believe humans alone can do. (I hope to explore the implications of the historical, relativist concept of artificial intelligence, espoused by people like Nancy Fulda, in a separate post.) As illustrated in the WSJ article, most people then evaluate NLG performance the way André Bazin evaluated style in realist cinema: as the art of realism lies in the seeming absence of artifice, so too does the art of algorithms lie in the seeming absence of automation. Commercialization only enhances this push towards verisimilitude, as investment banks and news outlets like Forbes won’t pay top dollar for software that generates strange prose. In turn, we come to judge machine intelligence by its humanness, orienting development efforts towards writing prose that we would have written ourselves.

Earlier this year, the Wall Street Journal (WSJ) published an article with an interactive sidebar featuring excerpts from financial investment research reports. Readers were prompted to identify whether the excerpts were written by robots or humans. Admittedly, Wall Street’s preference for terse prose over poetic flourish makes such a challenge plausible. One sampled sentence, “Q2 cash balance expectation of $830m implies ~$80m of cash burn in Q2 after a $140m reduction in cash balance in Q1,” is effectively just three data points fused together with syntax. And that’s no coincidence. White-collar workers like journalists need not fear for their job security (at least not yet…) because new natural language generation (NLG) algorithms are very good at representing structured data sets in prose, but not yet very good at much else. That capability is powerful in itself, as our ability to draw insights from data often depends on how they are presented (e.g., a chart reveals patterns one would have missed in rows and columns). But it is a far cry from the creative courage required to build a world on a blank page.

Story 49

NLG algorithms are generally considered to be a form of “artificial” or “machine intelligence” because they do things—like write news articles about sports or the weather, or write real estate ads, as the prototype my Fast Forward Labs colleagues built—we believe humans alone can do. (I hope to explore the implications of the historical, relativist concept of artificial intelligence, espoused by people like Nancy Fulda, in a separate post.) As illustrated in the WSJ article, most people then evaluate NLG performance the way André Bazin evaluated style in realist cinema: as the art of realism lies in the seeming absence of artifice, so too does the art of algorithms lie in the seeming absence of automation. Commercialization only enhances this push towards verisimilitude, as investment banks and news outlets like Forbes won’t pay top dollar for software that generates strange prose. In turn, we come to judge machine intelligence by its humanness, orienting development efforts towards writing prose that we would have written ourselves.

At least two 2014 submissions use dreams as a locus to explore the odd beauty of machine intelligence. Thricedotted’s The Seeker relates the autobiography of a machine trying to “learn about human behavior by reading WikiHow.” The work is visually beautiful, with each iteration of the algorithm’s operations punctuated by pages that raindrop abstractions and house aphorisms like “imagine not one thing could be undirected.” Like the hopscotch overtones in Cortázar’s Rayuela, the aphorisms encourage the reader to perceive meaningful patterns in what might otherwise be random data (Thricedotted’s internet identity often mentions apophenia). Time and again, the algorithm repeats a “work, scan, imagine” loop, scraping WikiHow, searching plain text memories for a concept encountered during “work,” and building a dream sequence—or “univision”—from concepts it doesn’t recognize. These univisions contain the most surprising poetry in the work, where beauty arises from the reader’s ineluctable tendency to feel meaning in fragments.

Moniker, a design studio based in Amsterdam, wrote a simple query that scans Twitter for sentences of the form “it’s + hour + : + minute + am/pm + and +” to compose a real-time global diary of daily activities. The “it’s [hour] and I am” formula tends to elicit predictable confessions or complaints, showing how stock expressions automate our thoughts: “It’s 12:20 and I need a drink;” “It’s 1:00 pm and I have not moved from my bed;” “It’s 11:00 pm and I’ve finally got a decent cup of coffee.” Another submission, Twide and Twejudice, replaces most of the words in the dialogue of Austen’s Pride and Prejudice with words used in similar contexts on Twitter, resulting in frivolous exchanges: (Mr Bennet asking Mrs Bennet about Mr Bingley:) “Is he/she overrun 0r single?” (Mrs Bennet exclaiming about Mr Bingley’s arrival:) “What _a fineee thingi 4my rageaholics girls!” While these lack the sophistication of The Seeker, by polluting Austen with Twitter diction they illustrate how contemporary media have modified communication norms.

Earlier this year, the Wall Street Journal (WSJ) published an article with an interactive sidebar featuring excerpts from financial investment research reports. Readers were prompted to identify whether the excerpts were written by robots or humans. Admittedly, Wall Street’s preference for terse prose over poetic flourish makes such a challenge plausible. One sampled sentence, “Q2 cash balance expectation of $830m implies ~$80m of cash burn in Q2 after a $140m reduction in cash balance in Q1,” is effectively just three data points fused together with syntax. And that’s no coincidence. White-collar workers like journalists need not fear for their job security (at least not yet…) because new natural language generation (NLG) algorithms are very good at representing structured data sets in prose, but not yet very good at much else. That capability is powerful in itself, as our ability to draw insights from data often depends on how they are presented (e.g., a chart reveals patterns one would have missed in rows and columns). But it is a far cry from the creative courage required to build a world on a blank page.

What if machines generated text with different stylistic goals? Or rather, what if we evaluated machine intelligence not by its humanness but by its alienness, by its ability to generate something beyond what we could have created—or would have thought to create—without the assistance of an algorithm? What if automated prose could rupture our automatized perceptions, as Shklovsky described poetry in Art as Device, and offer a new vehicle for our own creativity?

It is this search to use automation as a vehicle for defamiliarization that makes NaNoGenMo so exciting. Darius Kazemi, an internet artist who runs an annual Bot Summit, created NaNoGenMo “on a whim” in November 2013. Thoughtful about literary form, Kazemi was amused by the fact that National Novel Writing Month (NaNoWriMo) set only two criteria for participants: submissions must be written in 30 days (the month of November) and must comprise at least 50,000 words. The absence of formal constraints invited experimentation: why write a novel when you can write an algorithm that writes a novel? He tweeted his idea, and a new community formed on GitHub (a web-based software development collaboration tool).

The latest developments in machine learning are enabling machines to develop models of us in turn, ever updating what information they present, and how they present it, to match the input we provide. Kazemi addresses this new give-and-take between man and machine head-on in his 2015 NaNoGenMo submission, “co-authoring” a novel with an algorithm: for every ten sentences the algorithm drafts, he commits only the one he, as human, likes best. “Who wrote the book?” he asks. “[The algorithm] wrote literally every word, but [I] dictated nearly the entire form of the novel.” This is the same kind of dynamic that new research tools built on IBM Watson present to lawyers and doctors: ROSS, a legal tool built on the Watson API, proposes answers to research questions, and all the lawyer has to do is commit the answer she likes best. If NaNoGenMo helps us think more deeply about that dynamic, it can offer important insights into the future of AI.

Evaluating these works by their capacity to read like human prose is a stale exercise because what qualifies as “natural” language is relative, not absolute. Our own linguistic habits are developed through interaction with others, be they members of a given social class, colleagues at work or school, or spambots littering our Twitter feeds. In a recent Medium post, Katie Rose Pipkin eloquently described how machines have already modified what we think of as natural language, whether we’re cognizant of it or not. We speak differently to search tools and virtual assistants because we have come to develop a tacit understanding of how they work and can modify our requests to communicate effectively.

Technical constraints explain why NaNoGenMo has come to align itself with poetics of recontextualization and reassembly. Genuine NLG algorithms, that is, those that can build words and syntax from the building blocks of letters and get smarter over time, are still nascent. Most of the 2014 submissions instead use rules to transform existing texts in creative ways, which also leads to topical similarities.

While open to anyone and, as in NaNoWriMo, governed by the single constraint that submissions contain at least 50,000 words, NaNoGenMo is gradually defining itself as a cohesive artistic movement that uses algorithms to experiment with literary form. The group’s identity is partly generated by ressentiment towards negative criticism that their “disjointed, robotic scripts” are “unlikely to trouble Booker judges.” Last year, one participant mocked how “futile it is to try to explain what we’re actually doing here, to the normals.” More positively, they are shaping identity through shared formal and critical resources. John Ohno (alias enkiv2) posted code to generate sestinas, haikus, and synonyms. Allison Parrish (alias aparrish) shared an interface to the Carnegie Mellon Pronouncing Dictionary that enables users to do things like scrape the dictionary for rhymes for a given word. Finally, Isaac Karth (alias ikarth) explained to members how the group’s tendency to assemble new poetry from prior texts has intellectual roots in Dadaism, Burroughs’s cut-up techniques, and the constraint-oriented works of Oulipo. When I spoke with Kazemi about the project, he said that Ken Goldsmith’s Uncreative Writing had inspired his thinking on how NaNoGenMo can challenge customary notions of authorship and creativity.

Allison Parrish’s I Waded in Clear Water uses sentiment analysis algorithms, which rank sentences based on features that indicate emotional texture, to transform Gustavus Hindman Miller’s Ten Thousand Dreams Interpreted. Parrish mobilizes the formulaic “action” and “denotation” structure of Miller’s text (action = “To see an oak full of acorns”; denotation = “denotes increase and promotion”). She first transforms the actions into first-person, simple-past sentences (“I saw an oak full of acorns”) and then reorders the sentences from the worst to the best thing that can happen in dreams, according to a score given by a sentiment analysis algorithm run on the denotation. The sentiment scores create short chapters (“I drove into muddy water. I saw others weeding.”) as well as longer chapters with paratactic strings of disjointed actions: “…I descended a ladder. I saw any one lame. I saw my lover taking laudanum through disappointment. I heard mocking laughter. I kept a ledge. I had lice on my body. I saw. I lost it. I felt melancholy over any event. I saw others melancholy. I sent a message….” According to the sentiment algorithm, wading in clear water is our best dream.
