Event:SemEval 2019

{{Event
|Acronym=SemEval 2019
|Title=13th International Workshop on Semantic Evaluation 2019
|In Event Series=Event Series:SemEval
|Single Day Event=no
|Start Date=2019/06/06
|End Date=2019/06/07
|Event Status=as scheduled
|Event Mode=on site
|City=Minneapolis
|Region=MN
|Country=Country:US
|Academic Field=Natural Language Processing
|Official Website=http://alt.qcri.org/semeval2019/
|DOI=10.25798/xe2t-p012
|Type=Conference
|Has coordinator=Jonathan May, Ekaterina Shutova, Aurelie Herbelot, Xiaodan Zhu, Marianna Apidianaki, Saif M. Mohammad
|pageCreator=User:Curator 19
|pageEditor=User:Curator 19
|contributionType=1
}}
 
{{Event Deadline
|Submission Deadline=2019/02/28
|Paper Deadline=2019/02/28
|Tutorial Deadline=2019/03/14
|Notification Deadline=2019/04/06
|Camera-Ready Deadline=2019/04/20
}}
{{Organizer
|Contributor Type=organization
|Organization=Association for Computational Linguistics
}}
{{Event Metric}}
{{S Event}}
 
SemEval has evolved from the SensEval word sense disambiguation evaluation series. The SemEval Wikipedia entry and the ACL SemEval Wiki provide a more detailed historical overview. '''SemEval-2019''' will be the '''13th workshop on semantic evaluation'''.

See also: NAACL_HLT_2019

SemEval-2019 will be held June 6–7, 2019 in Minneapolis, USA, co-located with the Annual Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT 2019).

==Important Dates==
 
'''Task Proposals''':
*26 Mar 2018: Task proposals due
*04 May 2018: Task proposal notifications
  
 
'''Setup for the Competition''':
*20 Aug 2018: CodaLab competition website ready and made public. Should include a basic task description and mailing group information for the task. Trial data ready. Evaluation script ready for participants to download and run on the trial data.
*17 Sep 2018: Training data ready. Development data ready. CodaLab competition website updated to include an evaluation script uploaded as part of the competition, so that participants can upload submissions on the development set and the script immediately checks the submission for format and computes the results on the development set (a sketch of such a script appears below). This is also the date by which a benchmark system should be made available to participants. The organizers should also run the submission created with the benchmark system on CodaLab, so that participants can see its results on the leaderboard.
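The actual evaluation scripts are task-specific and are distributed by the individual task organizers; none are reproduced on this page. Purely as a hypothetical illustration of the kind of checker described above, the sketch below assumes a gold file and a submission file with one label per line and reports accuracy; the file format, metric, and all names are assumptions, not part of SemEval-2019.

<syntaxhighlight lang="python">
# Hypothetical sketch of a minimal CodaLab-style evaluation script.
# Assumptions (not from this page): one label per line in both files,
# and accuracy as the reported metric.
import sys


def read_labels(path):
    """Read one label per line, skipping blank lines."""
    with open(path, encoding="utf-8") as f:
        return [line.strip() for line in f if line.strip()]


def main(gold_path, submission_path):
    gold = read_labels(gold_path)
    pred = read_labels(submission_path)
    # Format check: the submission must contain one prediction per gold instance.
    if len(pred) != len(gold):
        sys.exit(f"Format error: expected {len(gold)} lines, got {len(pred)}")
    correct = sum(g == p for g, p in zip(gold, pred))
    # CodaLab-style scorers report "key: value" scores; printing to stdout
    # keeps this sketch self-contained.
    print(f"accuracy: {correct / len(gold):.4f}")


if __name__ == "__main__":
    main(sys.argv[1], sys.argv[2])
</syntaxhighlight>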
  
 
'''Competition and Beyond:'''
*10 Jan 2019: Evaluation start*
*31 Jan 2019: Evaluation end*
*05 Feb 2019: Results posted
*28 Feb 2019: System and task description paper submissions due by 23:59 GMT -12:00 (see the conversion sketch after this list)
*14 Mar 2019: Paper reviews due (for both systems and tasks)
*06 Apr 2019: Author notifications
*20 Apr 2019: Camera-ready submissions due
*Summer 2019: SemEval 2019
  
*10 Jan to 31 Jan 2019 is the period during which the task organizers must schedule the evaluation periods for their individual tasks. Usually, evaluation periods for individual tasks are 7 to 14 days, but there is no hard and fast rule about this. Contact the organizers of the tasks you are interested in for the exact time frame in which they will conduct their evaluations. They should tell you the date by which they will release the test data and the date by which participant submissions are to be uploaded. Note that some tasks may involve more than one sub-task, each having a separate evaluation time frame.
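The paper deadline above is stated in GMT -12:00 (the "anywhere on Earth" convention). As a small illustrative check using only Python's standard library, the instant can be expressed in UTC like this:

<syntaxhighlight lang="python">
# Convert the 28 Feb 2019, 23:59 GMT -12:00 paper deadline to UTC.
from datetime import datetime, timedelta, timezone

deadline = datetime(2019, 2, 28, 23, 59, tzinfo=timezone(timedelta(hours=-12)))
print(deadline.astimezone(timezone.utc))  # 2019-03-01 11:59:00+00:00
</syntaxhighlight>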
  
 
==Organizers==
*Jonathan May, ISI, University of Southern California
*Ekaterina Shutova, University of Cambridge
*Aurelie Herbelot, University of Trento
*Xiaodan Zhu, Queen's University
*Marianna Apidianaki, LIMSI, CNRS, Université Paris-Saclay & University of Pennsylvania
*Saif M. Mohammad, National Research Council Canada
  
  
==Introduction==
Welcome to SemEval-2019! The Semantic Evaluation (SemEval) series of workshops focuses on the evaluation and comparison of systems that can analyse diverse semantic phenomena in text, with the aim of extending the current state of the art in semantic analysis and creating high-quality annotated datasets for a range of increasingly challenging problems in natural language semantics. SemEval provides an exciting forum for researchers to propose challenging research problems in semantics and to build systems and techniques to address such research problems.

'''SemEval-2019 is the thirteenth workshop in the series of International Workshops on Semantic Evaluation.''' The first three workshops, SensEval-1 (1998), SensEval-2 (2001), and SensEval-3 (2004), focused on word sense disambiguation, each time growing in the number of languages offered, in the number of tasks, and in the number of participating teams. In 2007, the workshop was renamed to SemEval, and the subsequent SemEval workshops evolved to include semantic analysis tasks beyond word sense disambiguation. In 2012, SemEval turned into a yearly event; it currently runs every year, but on a two-year cycle, i.e., the tasks for SemEval 2019 were proposed in 2018.

SemEval-2019 was co-located with the '''17th Annual Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL HLT 2019)''' in Minneapolis, Minnesota, USA.
