kpe (Aachen)
SimpleSensing_ESP32.ino
#include <BLEDevice.h>
#include <BLEServer.h>
#include <BLEUtils.h>
#include <BLE2902.h>

uint8_t note = 38; // MIDI note number for the snare (38 = acoustic snare)
// {threshold, sensitivity, note (unused), flag, velocity, last peak value}
int SNARE[6] = {150, 4000, 38, 3, 0, 0};
boolean snareFlag = false;          // true while a hit is being tracked
BLECharacteristic *pCharacteristic; // BLE characteristic used to send MIDI data
@shagunsodhani
shagunsodhani / End-to-end optimization of goal-driven and visually grounded dialogue systems.md
Created Apr 5, 2017
Notes for paper "End-to-end optimization of goal-driven and visually grounded dialogue systems"

End-to-end optimization of goal-driven and visually grounded dialogue systems

Introduction

  • The paper introduces an architecture for end-to-end Reinforcement Learning (RL) optimization of task-oriented dialogue systems, and applies it to a multimodal task: grounding the dialogue in a visual context.

Encoder-Decoder Models vs RL Models

  • Encoder-decoder models do not account for the planning problem inherent in dialogue systems, and they do not integrate seamlessly with external contexts or knowledge bases.
  • RL models can handle the planning problem, but they require online learning and a predefined task structure.