BEGIN:VCALENDAR
PRODID:-//Microsoft Corporation//Outlook 16.0 MIMEDIR//EN
VERSION:2.0
METHOD:PUBLISH
X-MS-OLK-FORCEINSPECTOROPEN:TRUE
BEGIN:VTIMEZONE
TZID:Pacific Standard Time
BEGIN:STANDARD
DTSTART:16011104T020000
RRULE:FREQ=YEARLY;BYDAY=1SU;BYMONTH=11
TZOFFSETFROM:-0700
TZOFFSETTO:-0800
END:STANDARD
BEGIN:DAYLIGHT
DTSTART:16010311T020000
RRULE:FREQ=YEARLY;BYDAY=2SU;BYMONTH=3
TZOFFSETFROM:-0800
TZOFFSETTO:-0700
END:DAYLIGHT
END:VTIMEZONE
BEGIN:VEVENT
CLASS:PUBLIC
CREATED:20191017T000034Z
DESCRIPTION:Automatic speech recognition (ASR) is a core technology to create convenient human-computer interfaces. But building ASR systems with competitive word error rate (WER) traditionally required specialized expertise\, large labeled datasets\, and complex approaches.\n\nJason Li and Vitaly Lavrukhin dive into how end-to-end models simplified speech recognition and present Jasper\, an end-to-end convolutional neural acoustic model\, which yields state-of-the-art WER on LibriSpeech\, an open dataset for speech recognition. They explore its implementation in the TensorFlow-based OpenSeq2Seq toolkit and how to use it to solve large-vocabulary speech recognition and speech command recognition problems. OpenSeq2Seq is an open source deep learning toolkit. They provide pretrained models for out-of-the-box experimentation.\n\nWhat you'll learn\n* Discover end-to-end speech recognition and the OpenSeq2Seq deep learning toolkit
DTEND;TZID="Pacific Standard Time":20191031T151000
DTSTAMP:20191017T000034Z
DTSTART;TZID="Pacific Standard Time":20191031T143000
LAST-MODIFIED:20191017T000034Z
LOCATION:TensorFlow World 2019 - Grand Ballroom C/D
PRIORITY:5
SEQUENCE:0
SUMMARY;LANGUAGE=en-us:NVIDIA Session: Speech recognition with OpenSeq2Seq
TRANSP:OPAQUE
UID:040000008200E00074C5B7101A82E00800000000B049B5F34284D501000000000000000010000000C2F7F7A506E31148A96780583CE240B6
X-ALT-DESC;FMTTYPE=text/html:
X-MICROSOFT-CDO-BUSYSTATUS:BUSY
X-MICROSOFT-CDO-IMPORTANCE:1
X-MICROSOFT-DISALLOW-COUNTER:FALSE
X-MS-OLK-AUTOFILLLOCATION:FALSE
X-MS-OLK-CONFTYPE:0
BEGIN:VALARM
TRIGGER:-PT15M
ACTION:DISPLAY
DESCRIPTION:Reminder
END:VALARM
END:VEVENT
END:VCALENDAR