Natural language generation

This article is about the computer processing ability. For the psychological concepts, see Origin of language and Universal grammar.

Natural language generation (NLG) is a software process that produces natural language output. A widely cited survey of NLG methods describes NLG as "the subfield of artificial intelligence and computational linguistics that is concerned with the construction of computer systems that can produce understandable texts in English or other human languages from some underlying non-linguistic representation of information".

While it is widely agreed that the output of any NLG process is text, there is some disagreement about whether the inputs of an NLG system need to be non-linguistic. Common applications of NLG methods include the production of various reports, for example weather and patient reports; image captions; and chatbots.

Automated NLG can be compared to the process humans use when they turn ideas into writing or speech. Psycholinguists prefer the term language production for this process, which can also be described in mathematical terms, or modeled in a computer for psychological research. NLG systems can also be compared to translators of artificial computer languages, such as decompilers or transpilers, which also produce human-readable code generated from an intermediate representation. Human languages tend to be considerably more complex and allow for much more ambiguity and variety of expression than programming languages, which makes NLG more challenging.
NLG may be viewed as complementary to natural-language understanding (NLU): whereas in natural-language understanding the system needs to disambiguate the input sentence to produce the machine representation language, in NLG the system needs to make decisions about how to put a representation into words. The practical considerations in building NLU versus NLG systems are not symmetrical. NLU needs to deal with ambiguous or erroneous user input, whereas the ideas the system wants to express through NLG are generally known precisely. NLG needs to choose a specific, self-consistent textual representation from many potential representations, whereas NLU generally tries to produce a single, normalized representation of the idea expressed.

NLG has existed since ELIZA was developed in the mid-1960s, but the methods were first used commercially in the 1990s. NLG techniques range from simple template-based systems like a mail merge that generates form letters, to systems that have a complex understanding of human grammar. NLG can also be accomplished by training a statistical model using machine learning, typically on a large corpus of human-written texts.
Example

The Pollen Forecast for Scotland system is a simple example of an NLG system that could essentially be based on a template. It takes as input six numbers, which give predicted pollen levels in different parts of Scotland. From these numbers, the system generates a short textual summary of pollen levels as its output.

For example, using the historical data for July 1, 2005, the software produces:

Grass pollen levels for Friday have increased from the moderate to high levels of yesterday with values of around 6 to 7 across most parts of the country. However, in Northern areas, pollen levels will be moderate with values of 4.

In contrast, the actual forecast (written by a human meteorologist) from this data was:

Pollen counts are expected to remain high at level 6 over most of Scotland, and even level 7 in the south east. The only relief is in the Northern Isles and far northeast of mainland Scotland with medium levels of pollen count.

Comparing these two illustrates some of the choices that NLG systems must make; these are further discussed below.
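The pollen system itself is not public, but a minimal sketch of a template-based generator in its style might look like the following. The thresholds, region names, and phrasing here are illustrative assumptions, not the actual system's rules:

```python
# A minimal sketch of a template-based NLG system in the style of the pollen
# forecast example. Thresholds and wording are illustrative assumptions.

def describe_level(level: int) -> str:
    """Lexical choice: map a numeric pollen level onto a word."""
    return "high" if level >= 6 else "moderate" if level >= 4 else "low"

def pollen_summary(day: str, levels: dict) -> str:
    """Fill a fixed sentence template from regional pollen readings."""
    peak = max(levels.values())
    # Content determination: only regions well below the peak get a mention.
    exceptions = {r: v for r, v in levels.items() if v <= peak - 2}
    text = (f"Grass pollen levels for {day} will be {describe_level(peak)} "
            f"with values of around {peak} across most parts of the country.")
    for region, value in exceptions.items():
        text += (f" However, in {region}, pollen levels will be "
                 f"{describe_level(value)} with values of {value}.")
    return text

print(pollen_summary("Friday", {"South East": 7, "Central": 6, "West": 6,
                                "East": 6, "South West": 6, "North": 4}))
```

Even this toy version has to make the kinds of choices discussed in the next section, such as which word to attach to a numeric level and which regions are worth mentioning at all.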
Stages

The process to generate text can be as simple as keeping a list of canned text that is copied and pasted, possibly linked with some glue text. The results may be satisfactory in simple domains such as horoscope machines or generators of personalised business letters. However, a sophisticated NLG system needs to include stages of planning and merging of information to enable the generation of text that looks natural and does not become repetitive. The typical stages of natural-language generation, as proposed by Dale and Reiter, are:

Content determination: Deciding what information to mention in the text. For instance, in the pollen example above, deciding whether to explicitly mention that the pollen level is 7 in the south east.

Document structuring: Overall organisation of the information to convey. For example, deciding to describe the areas with high pollen levels first, instead of the areas with low pollen levels.

Aggregation: Merging of similar sentences to improve readability and naturalness. For instance, merging the two sentences "Grass pollen levels for Friday have increased from the moderate to high levels of yesterday" and "Grass pollen levels will be around 6 to 7 across most parts of the country" into the single sentence "Grass pollen levels for Friday have increased from the moderate to high levels of yesterday with values of around 6 to 7 across most parts of the country".

Lexical choice: Putting words to the concepts. For example, deciding whether "medium" or "moderate" should be used when describing a pollen level of 4.

Referring expression generation: Creating referring expressions that identify objects and regions. For example, deciding to use "in the Northern Isles and far northeast of mainland Scotland" to refer to a certain region in Scotland. This task also includes making decisions about pronouns and other types of anaphora.

Realisation: Creating the actual text, which should be correct according to the rules of syntax, morphology, and orthography. For example, using "will be" for the future tense of "to be".
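The stages above can be sketched as a toy pipeline. Each function below is a deliberately tiny stand-in for a decision that real systems make with rules, statistics, or learned models; all names and heuristics are illustrative assumptions:

```python
# Schematic sketch of a Dale-and-Reiter-style pipeline over pollen-like data.
DATA = [{"region": "most parts of the country", "level": 7},
        {"region": "the Northern Isles", "level": 4}]

def content_determination(data):
    """Decide what to mention: drop negligible readings."""
    return [d for d in data if d["level"] >= 2]

def document_structuring(msgs):
    """Order the messages: areas with high pollen levels first."""
    return sorted(msgs, key=lambda m: m["level"], reverse=True)

def lexical_choice(level):
    """Choose a word for a number (e.g. 'medium' vs 'moderate')."""
    return "high" if level >= 6 else "moderate"

def referring_expression(region):
    """Pick a phrase that identifies the region for the reader."""
    return region  # a real system might prefer e.g. 'the far northeast'

def realisation(msgs):
    """Produce grammatical text, aggregating clauses into one sentence."""
    clauses = [f"pollen levels will be {lexical_choice(m['level'])} "
               f"({m['level']}) in {referring_expression(m['region'])}"
               for m in msgs]
    sentence = "; however, ".join(clauses)
    return sentence[0].upper() + sentence[1:] + "."

print(realisation(document_structuring(content_determination(DATA))))
```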
An alternative approach to NLG is to use "end-to-end" machine learning to build a system, without having separate stages as above. In other words, we build an NLG system by training a machine learning algorithm (often an LSTM) on a large data set of input data and corresponding (human-written) output texts. The end-to-end approach has perhaps been most successful in image captioning, that is, automatically generating a textual caption for an image.
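To make the end-to-end idea concrete, here is a minimal encoder-decoder skeleton in PyTorch of the kind of LSTM model such systems train on (input data, output text) pairs. The sizes, vocabularies, and toy tensors are assumptions for illustration, not any particular published system:

```python
# Minimal LSTM encoder-decoder sketch (PyTorch) for end-to-end NLG.
import torch
import torch.nn as nn

class Seq2Seq(nn.Module):
    def __init__(self, in_vocab, out_vocab, hidden=128):
        super().__init__()
        self.embed_in = nn.Embedding(in_vocab, hidden)
        self.embed_out = nn.Embedding(out_vocab, hidden)
        self.encoder = nn.LSTM(hidden, hidden, batch_first=True)
        self.decoder = nn.LSTM(hidden, hidden, batch_first=True)
        self.project = nn.Linear(hidden, out_vocab)

    def forward(self, src, tgt):
        # Encode the linearized input data into a hidden state...
        _, state = self.encoder(self.embed_in(src))
        # ...then decode it token by token into text (teacher forcing).
        out, _ = self.decoder(self.embed_out(tgt), state)
        return self.project(out)  # logits over the output vocabulary

model = Seq2Seq(in_vocab=100, out_vocab=100)
src = torch.randint(0, 100, (1, 12))  # e.g. linearized input records
tgt = torch.randint(0, 100, (1, 20))  # e.g. output text tokens
print(model(src, tgt).shape)          # torch.Size([1, 20, 100])
```

Trained at scale on paired data, a model like this learns content selection and realisation jointly, rather than as separate pipeline stages.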
Applications

Automatic report generation

From a commercial perspective, the most successful NLG applications have been data-to-text systems which generate textual summaries of databases and data sets; these systems usually perform data analysis as well as text generation. Research has shown that textual summaries can be more effective than graphs and other visuals for decision support, and that computer-generated texts can be superior (from the reader's perspective) to human-written texts.

The first commercial data-to-text systems produced weather forecasts from weather data. The earliest such system to be deployed was FoG, which was used by Environment Canada to generate weather forecasts in French and English in the early 1990s. The success of FoG triggered other work, both research and commercial. Recent applications include the UK Met Office's text-enhanced forecast.

Data-to-text systems have since been applied in a range of settings. Following the minor earthquake near Beverly Hills, California on March 17, 2014, The Los Angeles Times reported details about the time, location and strength of the quake within 3 minutes of the event. This report was automatically generated by a "robo-journalist", which converted the incoming data into text via a preset template.
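The newspaper's pipeline is not public in full detail, but the preset-template idea it describes can be sketched as follows; the field names and values are placeholders, not the paper's actual template:

```python
# Hedged sketch of a 'robo-journalist' preset template: structured event
# data is slotted into fixed prose. Field names are assumptions.
from string import Template

QUAKE_TEMPLATE = Template(
    "A magnitude $magnitude earthquake struck near $place at $time, "
    "according to the incoming seismic data feed.")

def quake_report(event: dict) -> str:
    return QUAKE_TEMPLATE.substitute(event)

print(quake_report({"magnitude": "4.4", "place": "Beverly Hills, California",
                    "time": "6:25 a.m. on March 17, 2014"}))
```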
Currently there is considerable commercial interest in using NLG to summarise financial and business data. Indeed, Gartner has said that NLG will become a standard feature of 90% of modern BI and analytics platforms. NLG is also being used commercially in automated journalism, chatbots, generating product descriptions for e-commerce sites, summarising medical records, and enhancing accessibility (for example by describing graphs and data sets to blind people).

An example of an interactive use of NLG is the WYSIWYM framework. It stands for What you see is what you meant and allows users to see and manipulate the continuously rendered view (NLG output) of an underlying formal language document (NLG input), thereby editing the formal language without learning it.

Looking ahead, the current progress in data-to-text generation paves the way for tailoring texts to specific audiences. For example, data from babies in neonatal care can be converted into text differently in a clinical setting, with different levels of technical detail and explanatory language, depending on the intended recipient of the text (doctor, nurse, patient). The same idea can be applied in a sports setting, with different reports generated for fans of specific teams.
Image captioning

Over the past few years, there has been an increased interest in automatically generating captions for images, as part of a broader endeavor to investigate the interface between vision and language. A case of data-to-text generation, image captioning (or automatic image description) involves taking an image, analyzing its visual content, and generating a textual description (typically a sentence) that verbalizes the most prominent aspects of the image.

An image captioning system involves two sub-tasks. In Image Analysis, features and attributes of an image are detected and labelled, before these outputs are mapped to linguistic structures. Recent research utilizes deep learning approaches through features from a pre-trained convolutional neural network such as AlexNet, VGG or Caffe, where caption generators use an activation layer from the pre-trained network as their input features. Text Generation, the second task, is performed using a wide range of techniques. For example, in the Midge system, input images are represented as triples consisting of object/stuff detections, action/pose detections and spatial relations. These are subsequently mapped to <noun, verb, preposition> triples and realized using a tree substitution grammar.
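The Image Analysis sub-task can be illustrated with a short sketch that takes activations from a layer of a pre-trained CNN (here VGG16 via torchvision, one of the networks mentioned above) as input features for a downstream caption generator. This assumes torchvision 0.13 or later for the weights API, and is a sketch rather than any published system:

```python
# Sketch: use a pre-trained CNN's activation layer as caption features.
import torch
from torchvision import models, transforms
from PIL import Image

vgg = models.vgg16(weights=models.VGG16_Weights.DEFAULT).eval()
# Drop the final classifier layer so the network yields a 4096-d feature
# vector instead of ImageNet class scores.
vgg.classifier = torch.nn.Sequential(*list(vgg.classifier.children())[:-1])

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

def image_features(path: str) -> torch.Tensor:
    """Return a (1, 4096) feature vector for one image file."""
    img = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        return vgg(img)

# features = image_features("photo.jpg")  # then condition a decoder on this
```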
Despite advancements, challenges and opportunities remain in image captioning research. Notwithstanding the recent introduction of Flickr30K, MS COCO and other large datasets that have enabled the training of more complex models such as neural networks, it has been argued that research in image captioning could benefit from larger and more diversified datasets. Designing automatic measures that can mimic human judgments in evaluating the suitability of image descriptions is another need in the area. Other open challenges include visual question-answering (VQA), as well as the construction and evaluation of multilingual repositories for image description.
Chatbots

Another area where NLG has been widely applied is automated dialogue systems, frequently in the form of chatbots. A chatbot or chatterbot is a software application used to conduct an on-line chat conversation via text or text-to-speech, in lieu of providing direct contact with a live human agent. While natural language processing (NLP) techniques are applied in deciphering human input, NLG informs the output part of the chatbot algorithms in facilitating real-time dialogues.

Early chatbot systems, including Cleverbot created by Rollo Carpenter in 1988 and published in 1997, reply to questions by identifying how a human has responded to the same question in a conversation database using information retrieval (IR) techniques.
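A minimal sketch of this retrieval idea: answer a new question by finding the most similar stored question and replaying the human response. The toy conversation database and TF-IDF similarity are assumptions for illustration, not Cleverbot's actual method:

```python
# Toy IR-style chatbot: nearest-question lookup with TF-IDF similarity.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

DATABASE = [
    ("how are you", "I'm fine, thanks for asking."),
    ("what is your name", "People call me a chatbot."),
    ("tell me a joke", "Why did the chicken cross the road?"),
]

questions = [q for q, _ in DATABASE]
vectorizer = TfidfVectorizer().fit(questions)
index = vectorizer.transform(questions)

def reply(user_input: str) -> str:
    """Return the stored human response to the most similar question."""
    scores = cosine_similarity(vectorizer.transform([user_input]), index)
    return DATABASE[scores.argmax()][1]

print(reply("hi, how are you doing"))  # -> "I'm fine, thanks for asking."
```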
Modern chatbot systems predominantly rely on machine learning (ML) models, such as sequence-to-sequence learning and reinforcement learning, to generate natural language output. Hybrid models have also been explored. For example, the Alibaba shopping assistant first uses an IR approach to retrieve the best candidates from the knowledge base, then uses an ML-driven seq2seq model to re-rank the candidate responses and generate the answer.
Creative writing and computational humor

Creative language generation by NLG has been hypothesized since the field's origins. A recent pioneer in the area is Phillip Parker, who has developed an arsenal of algorithms capable of automatically generating textbooks, crossword puzzles, poems and books on topics ranging from bookbinding to cataracts. The advent of large pretrained transformer-based language models such as GPT-3 has also enabled breakthroughs, with such models demonstrating recognizable ability on creative-writing tasks.

A related area of NLG application is computational humor production. JAPE (Joke Analysis and Production Engine) is one of the earliest large automated humor production systems; it uses a hand-coded template-based approach to create punning riddles for children. HAHAcronym creates humorous reinterpretations of any given acronym, as well as proposing new fitting acronyms given some keywords.
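JAPE itself is a substantial rule-based system; the toy sketch below only illustrates its core idea of a hand-coded riddle template filled from a small lexicon. The template and entries are invented for illustration:

```python
# Toy JAPE-style riddle generator: a fixed template plus a tiny lexicon.
import random

TEMPLATE = "What do you get when you cross {a} with {b}? {answer}!"

LEXICON = [
    # (thing A, thing B, punning answer combining the two)
    ("a sheep", "a kangaroo", "A woolly jumper"),
    ("a snowman", "a shark", "Frostbite"),
]

def riddle() -> str:
    a, b, answer = random.choice(LEXICON)
    return TEMPLATE.format(a=a, b=b, answer=answer)

print(riddle())
```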
Despite progress, many challenges remain in producing automated creative and humorous content that rivals human output. In an experiment for generating satirical headlines, outputs of the best BERT-based model were perceived as funny 9.4% of the time (while real headlines from The Onion were rated funny 38.4% of the time), and a GPT-2 model fine-tuned on satirical headlines achieved 6.9%. It has been pointed out that two main issues with humor-generation systems are the lack of annotated data sets and the lack of formal evaluation methods, which could be applicable to other creative content generation. Some have argued that, relative to other applications, there has been a lack of attention to creative aspects of language production within NLG. NLG researchers stand to benefit from insights into what constitutes creative language production, as well as from structural features of narrative that have the potential to improve NLG output even in data-to-text systems.
Evaluation

As in other scientific fields, NLG researchers need to test how well their systems, modules, and algorithms work. This is called evaluation. There are three basic techniques for evaluating NLG systems:

Task-based (extrinsic) evaluation: give the generated text to a person, and assess how well it helps them perform a task (or otherwise achieves its communicative goal). For example, a system which generates summaries of medical data can be evaluated by giving these summaries to doctors, and assessing whether the summaries help doctors make better decisions.

Human ratings: give the generated text to a person, and ask them to rate the quality and usefulness of the text.

Metrics: compare generated texts to texts written by people from the same input data, using an automatic metric such as BLEU, METEOR, ROUGE or LEPOR (see the sketch after this list).
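As a small example of metric-based evaluation, the snippet below scores the machine-generated pollen forecast from the example section against the human-written one using BLEU (via NLTK). The low score it produces illustrates why surface-overlap metrics can disagree with human judgments even when both texts are reasonable:

```python
# Scoring a generated sentence against a human reference with BLEU.
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

reference = ("pollen counts are expected to remain high at level 6 over "
             "most of scotland and even level 7 in the south east").split()
generated = ("grass pollen levels for friday have increased from the "
             "moderate to high levels of yesterday with values of around "
             "6 to 7 across most parts of the country").split()

score = sentence_bleu([reference], generated,
                      smoothing_function=SmoothingFunction().method1)
print(f"BLEU: {score:.3f}")  # low n-gram overlap despite both being usable
```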
The ultimate measure of NLG systems is how useful they are at helping people, which is what the first of the above techniques assesses. However, task-based evaluations are time-consuming and expensive, and can be difficult to carry out (especially if they require subjects with specialised expertise, such as doctors). Hence (as in other areas of NLP) task-based evaluations are the exception, not the norm.

Recently researchers have been assessing how well human ratings and metrics correlate with (predict) task-based evaluations. This work is being conducted in the context of Generation Challenges shared-task events. Initial results suggest that human ratings are much better than metrics in this regard: human ratings usually do predict task-effectiveness at least to some degree (although there are exceptions), while ratings produced by metrics often do not. These results are preliminary. In any case, human ratings are the most popular evaluation technique in NLG; this is in contrast to machine translation, where metrics are widely used.

An AI can be graded on faithfulness to its training data or, alternatively, on factuality. A response that reflects the training data but not reality is faithful but not factual. A confident but unfaithful response is a hallucination. In Natural Language Processing, a hallucination is often defined as "generated content that is nonsensical or unfaithful to the provided source content".
See also

Autocompletion
Automated journalism
Automated paraphrasing
Generative art § Literature
Markov text generators
Meaning-text theory
References

Gatt A, Krahmer E (2018). "Survey of the State of the Art in Natural Language Generation: Core tasks, applications and evaluation". Journal of Artificial Intelligence Research. 61: 65–170. arXiv:1703.09902. doi:10.1613/jair.5477.
Perera R, Nand P (2017). "Recent Advances in Natural Language Generation: A Survey and Classification of the Empirical Literature". Computing and Informatics. 36 (1): 1–32. doi:10.4149/cai_2017_1_1. hdl:10292/10691.
Reiter E, Dale R (March 1997). "Building applied natural language generation systems". Natural Language Engineering. 3 (1): 57–87. doi:10.1017/S1351324997001502.
Dale R, Reiter E (2000). Building Natural Language Generation Systems. Cambridge, UK: Cambridge University Press. ISBN 978-0-521-02451-8.
Goldberg E, Driedger N, Kittredge R (1994). "Using Natural-Language Processing to Produce Weather Forecasts". IEEE Expert. 9 (2): 45–53. doi:10.1109/64.294135.
Sripada S, Burnett N, Turner R, Mastin J, Evans D (2014). "Generating A Case Study: NLG meeting Weather Industry Demand for Quality and Quantity of Textual Weather Forecasts". Proceedings of INLG 2014.
Schwencke K (2014-03-17). "Earthquake aftershock: 2.7 quake strikes near Westwood". Los Angeles Times.
Levenson E (2014-03-17). "L.A. Times Journalist Explains How a Bot Wrote His Earthquake Story for Him". The Atlantic.
"Neural Networks and Modern BI Platforms Will Evolve Data and Analytics".
Harris MD (2008). "Building a Large-Scale Commercial NLG System for an EMR". Proceedings of the Fifth International Natural Language Generation Conference. pp. 157–60.
Evans R, Piwek P, Cahill L (2002). "What is NLG?". INLG 2002. New York, US.
"Welcome to the iGraph-Lite page". www.inf.udec.cl. Archived from the original on 2010-03-16.
Law A, Freer Y, Hunter J, Logie R, McIntosh N, Quinn J (2005). "A Comparison of Graphical and Textual Presentations of Time Series Data to Support Medical Decision Making in the Neonatal Intensive Care Unit". Journal of Clinical Monitoring and Computing. 19 (3): 183–94. doi:10.1007/s10877-005-0879-3. PMID 16244840.
Gkatzia D, Lemon O, Rieser V (2017). "Data-to-Text Generation Improves Decision-Making Under Uncertainty". IEEE Computational Intelligence Magazine. 12 (3): 10–17. doi:10.1109/MCI.2017.2708998.
"Text or Graphics?". 2016-12-26.
Portet F, Reiter E, Gatt A, Hunter J, Sripada S, Freer Y, Sykes C (2009). "Automatic Generation of Textual Summaries from Neonatal Intensive Care Data". Artificial Intelligence. 173 (7–8): 789–816. doi:10.1016/j.artint.2008.12.002.
Reiter E, Sripada S, Hunter J, Yu J, Davy I (2005). "Choosing Words in Computer-Generated Weather Forecasts". Artificial Intelligence. 167 (1–2): 137–69. doi:10.1016/j.artint.2005.06.006.
Turner R, Sripada S, Reiter E, Davy I (2006). "Generating Spatio-Temporal Descriptions in Pollen Forecasts". Proceedings of EACL06.
Farhadi A, Hejrati M, Sadeghi MA, Young P, Rashtchian C, Hockenmaier J, Forsyth D (2010). "Every picture tells a story: Generating sentences from images". European Conference on Computer Vision. Berlin, Heidelberg: Springer. pp. 15–29. doi:10.1007/978-3-642-15561-1_2.
Kodali V, Berleant D (2022). "Recent, Rapid Advancement in Visual Question Answering Architecture: a Review". Proceedings of the 22nd IEEE International Conference on EIT. pp. 133–146. arXiv:2203.01322.
"DataLabCup: Image Caption".
"E2E NLG Challenge".
Mnasri M (2019). "Recent advances in conversational NLP: Towards the standardization of Chatbot building". arXiv:1903.09025.
"How To Author Over 1 Million Books". HuffPost. 2013-02-11.
"Exploring GPT-3: A New Breakthrough in Language Generation". KDnuggets.
Winters T (2021). "Computers Learning Humor Is No Joke". Harvard Data Science Review. 3 (2). doi:10.1162/99608f92.f13a2337.
Horvitz Z, Do N, Littman ML (2020). "Context-Driven Satirical News Generation". Proceedings of the Second Workshop on Figurative Language Processing. Association for Computational Linguistics. pp. 40–50. doi:10.18653/v1/2020.figlang-1.5.
Ji Z, Lee N, Frieske R, Yu T, Su D, Xu Y, Ishii E, Bang Y, Madotto A, Fung P (2022). "Survey of Hallucination in Natural Language Generation". ACM Computing Surveys. 55 (12). arXiv:2202.03629. doi:10.1145/3571730.
Reiter E (2018-01-16). "How do I Learn about NLG?".
Reiter E (2021-03-21). "History of NLG". Archived from the original on 2021-12-12.
Further reading

Dale, Robert; Reiter, Ehud (2000). Building Natural Language Generation Systems. Cambridge, UK: Cambridge University Press. ISBN 978-0-521-02451-8.