create mod_dialogflow_cx project

Dave Horton
2024-08-14 08:42:49 -04:00
parent b003ab0875
commit 20bdcb6687
9 changed files with 1740 additions and 0 deletions


@@ -0,0 +1,10 @@
include $(top_srcdir)/build/modmake.rulesam
MODNAME=mod_dialogflow_cx
mod_LTLIBRARIES = mod_dialogflow_cx.la
mod_dialogflow_cx_la_SOURCES = mod_dialogflow.c google_glue.cpp parser.cpp
mod_dialogflow_cx_la_CFLAGS = $(AM_CFLAGS)
mod_dialogflow_cx_la_CXXFLAGS = -I $(top_srcdir)/libs/googleapis/gens $(AM_CXXFLAGS) -std=c++17
mod_dialogflow_cx_la_LIBADD = $(switch_builddir)/libfreeswitch.la
mod_dialogflow_cx_la_LDFLAGS = -avoid-version -module -no-undefined -shared `pkg-config --libs grpc++ grpc`


@@ -0,0 +1,84 @@
# mod_dialogflow
A Freeswitch module that connects a Freeswitch channel to a [dialogflow agent](https://dialogflow.com/docs/getting-started/first-agent) so that an IVR interaction can be driven completely by dialogflow logic.
Once a Freeswitch channel is connected to a dialogflow agent, media is streamed to the dialogflow service, which returns information describing the detected "intent", along with transcriptions, audio prompts, and prompt text to play to the caller. The module handles returned audio in two steps:
1. If an audio clip was returned, it is *not* immediately played to the caller, but instead is written to a temporary audio file on the Freeswitch server.
2. Next, a Freeswitch custom event is sent to the application containing the details of the dialogflow response as well as the path to that file.
This allows the application to decide whether to play the returned audio clip (via the mod_dptools 'play' command) or to use a text-to-speech service to generate audio from the returned prompt text.
## API
### Commands
The freeswitch module exposes the following API commands:
#### dialogflow_start
```
dialogflow_start <uuid> <project-id> <lang-code> [<event>]
```
Attaches a media bug to the channel and starts a streaming detect-intent request.
- `uuid` - unique identifier of Freeswitch channel
- `project-id` - the identifier of the dialogflow project to execute, which may optionally include a dialogflow environment, a region and output audio configurations (see below).
- `lang-code` - a valid dialogflow [language tag](https://dialogflow.com/docs/reference/language) to use for speech recognition
- `event` - name of an initial event to send to dialogflow; e.g. to trigger an initial prompt
When executing a dialogflow project, the environment and region will default to 'draft' and 'us', respectively.
To specify both an environment and a region, provide a value for project-id in the dialogflow_start command as follows:
```
dialogflow-project-id:environment:region, e.g. myproject:production:eu-west1
```
To specify environment and default to the global region:
```
dialogflow-project-id:environment, e.g. myproject:production
```
To specify a region and default environment:
```
dialogflow-project-id::region, e.g. myproject::eu-west1
```
To simply use the defaults for both environment and region:
```
dialogflow-project-id, e.g. myproject
```
By default, [Output Audio configurations](https://cloud.google.com/dialogflow/es/docs/reference/rest/v2/OutputAudioConfig) and [Sentiment Analysis](https://cloud.google.com/dialogflow/es/docs/reference/rpc/google.cloud.dialogflow.v2beta1#google.cloud.dialogflow.v2beta1.SentimentAnalysisRequestConfig) are ignored and the configs selected for [your agent in the Dialogflow platform](https://dialogflow.cloud.google.com/) are used. However, if you wish to abstract your implementation from the platform and define them programmatically, you can do so in the dialogflow_start command as follows:
```
dialogflow-project-id:environment:region:speakingRate:pitch:volume:voice-name:voice-gender:effect:sentiment-analysis
```
Example:
```
myproject:production:eu-west1:1.1:1.5:2.5:en-GB-Standard-D:F:handset-class-device:true
```
Speaking rate, pitch and volume take double values; details [here](https://cloud.google.com/dialogflow/es/docs/reference/rest/v2/projects.agent.environments#synthesizespeechconfig).
Voice Name takes a valid Text-to-Speech model name (choose from the available voices at https://cloud.google.com/text-to-speech/docs/voices). If not set, the Dialogflow service will choose a voice based on the other parameters, such as language code and gender.
Voice Gender should be M for male, F for female, N for neutral, or empty for unspecified. If not set, the Dialogflow service will choose a voice based on the other parameters, such as language code and name. Note that this is only a preference, not a requirement; if a voice of the appropriate gender is not available, the synthesizer will substitute a voice of a different gender rather than failing the request.
Effects are applied to the text-to-speech output and are used to improve audio playback on different types of hardware. Available effects and details [here](https://cloud.google.com/text-to-speech/docs/audio-profiles#available_audio_profiles).
Sentiment Analysis uses Cloud Natural Language to provide a sentiment score for each user query. To enable it, send the boolean ```true```.
#### dialogflow_stop
```
dialogflow_stop <uuid>
```
Stops dialogflow on the channel.
### Events
* `dialogflow::intent` - a dialogflow [intent](https://dialogflow.com/docs/intents) has been detected.
* `dialogflow::transcription` - a transcription has been returned
* `dialogflow::audio_provided` - an audio prompt has been returned from dialogflow. Dialogflow returns both an audio clip in linear 16 format and the text of the prompt; the module writes the clip to a temporary file and provides its path in this event (see the sketch in the Usage section below), so the application can choose how to play it.
* `dialogflow::end_of_utterance` - dialogflow has detected the end of an utterance
* `dialogflow::error` - dialogflow has returned an error
## Usage
When using [drachtio-fsmrf](https://www.npmjs.com/package/drachtio-fsmrf), you can access this API command via the api method on the 'endpoint' object.
```js
ep.api('dialogflow_start', `${ep.uuid} my-agent-uuxr:production en-US welcome`);
```
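A minimal sketch (not part of this module) of consuming the events above and playing the returned audio; `addCustomEventListener` and the `getBody()` accessor are assumed here from drachtio-fsmrf conventions, so verify against the library and the example gateway below:
```js
// illustrative only: play the audio clip that mod_dialogflow wrote to disk
ep.addCustomEventListener('dialogflow::audio_provided', (evt) => {
  const {path} = JSON.parse(evt.getBody());  // assumed: event body carries the module's JSON payload
  ep.play(path);                             // or hand the prompt text to a TTS service instead
});
```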
## Examples
[drachtio-dialogflow-phone-gateway](https://github.com/davehorton/drachtio-dialogflow-phone-gateway)


@@ -0,0 +1,5 @@
<configuration name="dialogflow.conf" description="Google Dialogflow Configuration">
<settings>
<param name="google-application-credentials-json-file" value="/tmp/gcs_service_account_key.json"/>
</settings>
</configuration>


@@ -0,0 +1,595 @@
#include <cstdlib>
#include <switch.h>
#include <switch_json.h>
#include <grpc++/grpc++.h>
#include <string.h>
#include <mutex>
#include <condition_variable>
#include <regex>
#include <fstream>
#include <string>
#include <sstream>
#include <map>
#include "google/cloud/dialogflow/cx/v3/session.grpc.pb.h"
#include "mod_dialogflow.h"
#include "parser.h"
using google::cloud::dialogflow::v2beta1::Sessions;
using google::cloud::dialogflow::v2beta1::StreamingDetectIntentRequest;
using google::cloud::dialogflow::v2beta1::StreamingDetectIntentResponse;
using google::cloud::dialogflow::v2beta1::AudioEncoding;
using google::cloud::dialogflow::v2beta1::InputAudioConfig;
using google::cloud::dialogflow::v2beta1::OutputAudioConfig;
using google::cloud::dialogflow::v2beta1::SynthesizeSpeechConfig;
using google::cloud::dialogflow::v2beta1::QueryInput;
using google::cloud::dialogflow::v2beta1::QueryResult;
using google::cloud::dialogflow::v2beta1::StreamingRecognitionResult;
using google::cloud::dialogflow::v2beta1::EventInput;
using google::rpc::Status;
using google::protobuf::Struct;
using google::protobuf::Value;
using google::protobuf::MapPair;
static uint64_t playCount = 0;
static std::multimap<std::string, std::string> audioFiles;
static bool hasDefaultCredentials = false;
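// Hangup hook: when the channel is torn down, remove any temporary audio files
// written for this session and deregister the hook.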
static switch_status_t hanguphook(switch_core_session_t *session) {
switch_channel_t *channel = switch_core_session_get_channel(session);
switch_channel_state_t state = switch_channel_get_state(channel);
if (state == CS_HANGUP || state == CS_ROUTING) {
char * sessionId = switch_core_session_get_uuid(session);
typedef std::multimap<std::string, std::string>::iterator MMAPIterator;
std::pair<MMAPIterator, MMAPIterator> result = audioFiles.equal_range(sessionId);
for (MMAPIterator it = result.first; it != result.second; it++) {
std::string filename = it->second;
std::remove(filename.c_str());
switch_log_printf(SWITCH_CHANNEL_SESSION_LOG(session), SWITCH_LOG_DEBUG,
"google_dialogflow_session_cleanup: removed audio file %s\n", filename.c_str());
}
audioFiles.erase(sessionId);
switch_core_event_hook_remove_state_change(session, hanguphook);
}
return SWITCH_STATUS_SUCCESS;
}
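// Copy scalar members of a cJSON object into a protobuf Struct of event
// parameters; arrays, nested objects, raw and null values are skipped.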
static void parseEventParams(Struct* grpcParams, cJSON* json) {
auto* map = grpcParams->mutable_fields();
int count = cJSON_GetArraySize(json);
for (int i = 0; i < count; i++) {
cJSON* prop = cJSON_GetArrayItem(json, i);
if (prop) {
google::protobuf::Value v;
switch (prop->type) {
case cJSON_False:
case cJSON_True:
v.set_bool_value(prop->type == cJSON_True);
break;
case cJSON_Number:
v.set_number_value(prop->valuedouble);
break;
case cJSON_String:
v.set_string_value(prop->valuestring);
break;
case cJSON_Array:
case cJSON_Object:
case cJSON_Raw:
case cJSON_NULL:
continue;
}
map->insert(MapPair<std::string, Value>(prop->string, v));
}
}
switch_log_printf(SWITCH_CHANNEL_LOG, SWITCH_LOG_DEBUG, "parseEventParams: added %d event params\n", (int) map->size());
}
void tokenize(std::string const &str, const char delim, std::vector<std::string> &out) {
size_t start = 0;
size_t end = 0;
bool finished = false;
do {
end = str.find(delim, start);
if (end == std::string::npos) {
finished = true;
out.push_back(str.substr(start));
}
else {
out.push_back(str.substr(start, end - start));
start = ++end;
}
} while (!finished);
}
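// GStreamer wraps a single gRPC StreamingDetectIntent session. The constructor
// parses the colon-delimited project-id argument
// (project:environment:region:speakingRate:pitch:volume:voice-name:voice-gender:effect:sentiment-analysis),
// picks the regional dialogflow endpoint, and sets up channel credentials.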
class GStreamer {
public:
GStreamer(switch_core_session_t *session, const char* lang, char* projectId, char* event, char* text) :
m_lang(lang), m_sessionId(switch_core_session_get_uuid(session)), m_environment("draft"), m_regionId("us"),
m_speakingRate(), m_pitch(), m_volume(), m_voiceName(""), m_voiceGender(""), m_effects(""),
m_sentimentAnalysis(false), m_finished(false), m_packets(0) {
const char* var;
switch_channel_t* channel = switch_core_session_get_channel(session);
std::vector<std::string> tokens;
const char delim = ':';
tokenize(projectId, delim, tokens);
int idx = 0;
for (auto &s: tokens) {
switch_log_printf(SWITCH_CHANNEL_LOG, SWITCH_LOG_DEBUG, "GStreamer: token %d: '%s'\n", idx, s.c_str());
if (0 == idx) m_projectId = s;
else if (1 == idx && s.length() > 0) m_environment = s;
else if (2 == idx && s.length() > 0) m_regionId = s;
else if (3 == idx && s.length() > 0) m_speakingRate = stod(s);
else if (4 == idx && s.length() > 0) m_pitch = stod(s);
else if (5 == idx && s.length() > 0) m_volume = stod(s);
else if (6 == idx && s.length() > 0) m_voiceName = s;
else if (7 == idx && s.length() > 0) m_voiceGender = s;
else if (8 == idx && s.length() > 0) m_effects = s;
else if (9 == idx && s.length() > 0) m_sentimentAnalysis = (s == "true");
idx++;
}
std::string endpoint = "dialogflow.googleapis.com";
if (0 != m_regionId.compare("us")) {
endpoint = m_regionId;
endpoint.append("-dialogflow.googleapis.com:443");
}
switch_log_printf(SWITCH_CHANNEL_LOG, SWITCH_LOG_INFO,
"GStreamer dialogflow endpoint is %s, region is %s, project is %s, environment is %s\n",
endpoint.c_str(), m_regionId.c_str(), m_projectId.c_str(), m_environment.c_str());
if ((var = switch_channel_get_variable(channel, "GOOGLE_APPLICATION_CREDENTIALS"))) {
auto callCreds = grpc::ServiceAccountJWTAccessCredentials(var, INT64_MAX);
auto channelCreds = grpc::SslCredentials(grpc::SslCredentialsOptions());
auto creds = grpc::CompositeChannelCredentials(channelCreds, callCreds);
m_channel = grpc::CreateChannel(endpoint, creds);
switch_log_printf(SWITCH_CHANNEL_LOG, SWITCH_LOG_DEBUG, "GStreamer json credentials are %s\n", var);
}
else {
auto creds = grpc::GoogleDefaultCredentials();
m_channel = grpc::CreateChannel(endpoint, creds);
}
startStream(session, event, text);
}
~GStreamer() {
switch_log_printf(SWITCH_CHANNEL_LOG, SWITCH_LOG_DEBUG, "GStreamer::~GStreamer wrote %ld packets %p\n", m_packets, this);
}
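// Build the initial StreamingDetectIntentRequest: the session path plus exactly
// one of an event, a text query, or an input audio config, along with any custom
// output-audio/sentiment settings; then open the stream and send it.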
void startStream(switch_core_session_t *session, const char* event, const char* text) {
char szSession[256];
m_request = std::make_shared<StreamingDetectIntentRequest>();
m_context= std::make_shared<grpc::ClientContext>();
m_stub = Sessions::NewStub(m_channel);
snprintf(szSession, 256, "projects/%s/locations/%s/agent/environments/%s/users/-/sessions/%s",
m_projectId.c_str(), m_regionId.c_str(), m_environment.c_str(), m_sessionId.c_str());
switch_log_printf(SWITCH_CHANNEL_LOG, SWITCH_LOG_INFO, "GStreamer::startStream session %s, event %s, text %s %p\n", szSession, event, text, this);
m_request->set_session(szSession);
auto* queryInput = m_request->mutable_query_input();
if (event) {
auto* eventInput = queryInput->mutable_event();
eventInput->set_name(event);
eventInput->set_language_code(m_lang.c_str());
if (text) {
cJSON* json = cJSON_Parse(text);
if (!json) {
switch_log_printf(SWITCH_CHANNEL_LOG, SWITCH_LOG_ERROR, "GStreamer::startStream ignoring event params since it is not json %s\n", text);
}
else {
switch_log_printf(SWITCH_CHANNEL_LOG, SWITCH_LOG_DEBUG, "GStreamer::startStream adding event params (JSON) %s\n", text);
auto* eventParams = eventInput->mutable_parameters();
parseEventParams(eventParams, json);
cJSON_Delete(json);
}
}
}
else if (text) {
auto* textInput = queryInput->mutable_text();
textInput->set_text(text);
textInput->set_language_code(m_lang.c_str());
}
else {
auto* audio_config = queryInput->mutable_audio_config();
audio_config->set_sample_rate_hertz(16000);
audio_config->set_audio_encoding(AudioEncoding::AUDIO_ENCODING_LINEAR_16);
audio_config->set_language_code(m_lang.c_str());
audio_config->set_single_utterance(true);
}
switch_log_printf(SWITCH_CHANNEL_LOG, SWITCH_LOG_DEBUG, "GStreamer::startStream checking OutputAudioConfig custom parameters: speaking rate %f,"
" pitch %f, volume %f, voice name '%s' gender '%s', effects '%s'\n", m_speakingRate,
m_pitch, m_volume, m_voiceName.c_str(), m_voiceGender.c_str(), m_effects.c_str());
if (isAnyOutputAudioConfigChanged()) {
switch_log_printf(SWITCH_CHANNEL_LOG, SWITCH_LOG_INFO, "GStreamer::startStream adding a custom OutputAudioConfig to the request since at"
" least one parameter was received.");
auto* outputAudioConfig = m_request->mutable_output_audio_config();
outputAudioConfig->set_sample_rate_hertz(16000);
outputAudioConfig->set_audio_encoding(OutputAudioEncoding::OUTPUT_AUDIO_ENCODING_LINEAR_16);
auto* synthesizeSpeechConfig = outputAudioConfig->mutable_synthesize_speech_config();
if (m_speakingRate) synthesizeSpeechConfig->set_speaking_rate(m_speakingRate);
if (m_pitch) synthesizeSpeechConfig->set_pitch(m_pitch);
if (m_volume) synthesizeSpeechConfig->set_volume_gain_db(m_volume);
if (!m_effects.empty()) synthesizeSpeechConfig->add_effects_profile_id(m_effects);
auto* voice = synthesizeSpeechConfig->mutable_voice();
if (!m_voiceName.empty()) voice->set_name(m_voiceName);
if (!m_voiceGender.empty()) {
SsmlVoiceGender gender = SsmlVoiceGender::SSML_VOICE_GENDER_UNSPECIFIED;
switch (toupper(m_voiceGender[0]))
{
case 'F': gender = SsmlVoiceGender::SSML_VOICE_GENDER_FEMALE; break;
case 'M': gender = SsmlVoiceGender::SSML_VOICE_GENDER_MALE; break;
case 'N': gender = SsmlVoiceGender::SSML_VOICE_GENDER_NEUTRAL; break;
}
voice->set_ssml_gender(gender);
}
} else {
switch_log_printf(SWITCH_CHANNEL_LOG, SWITCH_LOG_INFO, "GStreamer::startStream no custom parameters for OutputAudioConfig, keeping default");
}
if (m_sentimentAnalysis) {
switch_log_printf(SWITCH_CHANNEL_LOG, SWITCH_LOG_INFO, "GStreamer::startStream received sentiment analysis flag as true, adding as query param");
auto* queryParameters = m_request->mutable_query_params();
auto* sentimentAnalysisConfig = queryParameters->mutable_sentiment_analysis_request_config();
sentimentAnalysisConfig->set_analyze_query_text_sentiment(m_sentimentAnalysis);
}
m_streamer = m_stub->StreamingDetectIntent(m_context.get());
m_streamer->Write(*m_request);
}
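// Only the first request (sent in startStream) carries configuration; subsequent
// requests carry audio alone, so clear query_input/query_params before writing.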
bool write(void* data, uint32_t datalen) {
if (m_finished) {
switch_log_printf(SWITCH_CHANNEL_LOG, SWITCH_LOG_DEBUG, "GStreamer::write not writing because we are finished, %p\n", this);
return false;
}
m_request->clear_query_input();
m_request->clear_query_params();
m_request->set_input_audio(data, datalen);
m_packets++;
return m_streamer->Write(*m_request);
}
bool read(StreamingDetectIntentResponse* response) {
return m_streamer->Read(response);
}
grpc::Status finish() {
switch_log_printf(SWITCH_CHANNEL_LOG, SWITCH_LOG_DEBUG, "GStreamer::finish %p\n", this);
if (m_finished) {
grpc::Status ok;
return ok;
}
m_finished = true;
return m_streamer->Finish();
}
void writesDone() {
m_streamer->WritesDone();
}
bool isFinished() {
return m_finished;
}
bool isAnyOutputAudioConfigChanged() {
return m_speakingRate || m_pitch || m_volume || !m_voiceName.empty() || !m_voiceGender.empty() || !m_effects.empty();
}
private:
std::string m_sessionId;
std::shared_ptr<grpc::ClientContext> m_context;
std::shared_ptr<grpc::Channel> m_channel;
std::unique_ptr<Sessions::Stub> m_stub;
std::unique_ptr< grpc::ClientReaderWriterInterface<StreamingDetectIntentRequest, StreamingDetectIntentResponse> > m_streamer;
std::shared_ptr<StreamingDetectIntentRequest> m_request;
std::string m_lang;
std::string m_projectId;
std::string m_environment;
std::string m_regionId;
double m_speakingRate;
double m_pitch;
double m_volume;
std::string m_effects;
std::string m_voiceName;
std::string m_voiceGender;
bool m_sentimentAnalysis;
bool m_finished;
uint32_t m_packets;
};
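// Tear down per-channel resources: the gRPC streamer and the resampler.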
static void killcb(struct cap_cb* cb) {
if (cb) {
if (cb->streamer) {
GStreamer* p = (GStreamer *) cb->streamer;
delete p;
cb->streamer = NULL;
}
if (cb->resampler) {
speex_resampler_destroy(cb->resampler);
cb->resampler = NULL;
}
}
}
static void *SWITCH_THREAD_FUNC grpc_read_thread(switch_thread_t *thread, void *obj) {
struct cap_cb *cb = (struct cap_cb *) obj;
GStreamer* streamer = (GStreamer *) cb->streamer;
switch_log_printf(SWITCH_CHANNEL_LOG, SWITCH_LOG_DEBUG, "grpc_read_thread: starting cb %p\n", (void *) cb);
// Our contract: while we are reading, cb and cb->streamer will not be deleted
// Read responses until there are no more
StreamingDetectIntentResponse response;
while (streamer->read(&response)) {
switch_core_session_t* psession = switch_core_session_locate(cb->sessionId);
if (psession) {
switch_channel_t* channel = switch_core_session_get_channel(psession);
GRPCParser parser(psession);
if (response.has_query_result() || response.has_recognition_result()) {
cJSON* jResponse = parser.parse(response) ;
char* json = cJSON_PrintUnformatted(jResponse);
const char* type = DIALOGFLOW_EVENT_TRANSCRIPTION;
if (response.has_query_result()) type = DIALOGFLOW_EVENT_INTENT;
else {
const StreamingRecognitionResult_MessageType& o = response.recognition_result().message_type();
if (0 == StreamingRecognitionResult_MessageType_Name(o).compare("END_OF_SINGLE_UTTERANCE")) {
type = DIALOGFLOW_EVENT_END_OF_UTTERANCE;
}
}
cb->responseHandler(psession, type, json);
free(json);
cJSON_Delete(jResponse);
}
const std::string& audio = parser.parseAudio(response);
bool playAudio = !audio.empty() ;
// save audio
if (playAudio) {
std::ostringstream s;
s << SWITCH_GLOBAL_dirs.temp_dir << SWITCH_PATH_SEPARATOR <<
cb->sessionId << "_" << ++playCount;
switch_log_printf(SWITCH_CHANNEL_SESSION_LOG(psession), SWITCH_LOG_DEBUG, "grpc_read_thread: received audio to play\n");
if (response.has_output_audio_config()) {
const OutputAudioConfig& cfg = response.output_audio_config();
switch_log_printf(SWITCH_CHANNEL_SESSION_LOG(psession), SWITCH_LOG_DEBUG, "grpc_read_thread: encoding is %d\n", cfg.audio_encoding());
if (cfg.audio_encoding() == OutputAudioEncoding::OUTPUT_AUDIO_ENCODING_MP3) {
s << ".mp3";
}
else if (cfg.audio_encoding() == OutputAudioEncoding::OUTPUT_AUDIO_ENCODING_OGG_OPUS) {
s << ".opus";
}
else {
s << ".wav";
}
}
std::ofstream f(s.str(), std::ofstream::binary);
f << audio;
f.close();
switch_log_printf(SWITCH_CHANNEL_SESSION_LOG(psession), SWITCH_LOG_DEBUG, "grpc_read_thread: wrote audio to %s\n", s.str().c_str());
// add the file to the list of files played for this session,
// we'll delete when session closes
audioFiles.insert(std::pair<std::string, std::string>(cb->sessionId, s.str()));
cJSON * jResponse = cJSON_CreateObject();
cJSON_AddItemToObject(jResponse, "path", cJSON_CreateString(s.str().c_str()));
char* json = cJSON_PrintUnformatted(jResponse);
cb->responseHandler(psession, DIALOGFLOW_EVENT_AUDIO_PROVIDED, json);
free(json);
cJSON_Delete(jResponse);
}
switch_core_session_rwunlock(psession);
}
else {
break;
}
}
switch_log_printf(SWITCH_CHANNEL_LOG, SWITCH_LOG_DEBUG, "dialogflow read loop is done\n");
// finish the detect intent session: here is where we may get an error if credentials are invalid
switch_core_session_t* psession = switch_core_session_locate(cb->sessionId);
if (psession) {
grpc::Status status = streamer->finish();
if (!status.ok()) {
std::ostringstream s;
s << "{\"msg\": \"" << status.error_message() << "\", \"code\": " << status.error_code();
if (status.error_details().length() > 0) {
s << ", \"details\": \"" << status.error_details() << "\"";
}
s << "}";
switch_log_printf(SWITCH_CHANNEL_LOG, SWITCH_LOG_CRIT, "StreamingDetectIntentRequest finished with err %s (%d): %s\n",
status.error_message().c_str(), status.error_code(), status.error_details().c_str());
cb->errorHandler(psession, s.str().c_str());
}
switch_core_session_rwunlock(psession);
}
switch_log_printf(SWITCH_CHANNEL_LOG, SWITCH_LOG_DEBUG, "dialogflow read thread exiting \n");
return NULL;
}
extern "C" {
switch_status_t google_dialogflow_init() {
const char* gcsServiceKeyFile = std::getenv("GOOGLE_APPLICATION_CREDENTIALS");
if (NULL == gcsServiceKeyFile) {
switch_log_printf(SWITCH_CHANNEL_LOG, SWITCH_LOG_NOTICE,
"\"GOOGLE_APPLICATION_CREDENTIALS\" environment variable is not set; authentication will use \"GOOGLE_APPLICATION_CREDENTIALS\" channel variable\n");
}
else {
hasDefaultCredentials = true;
}
return SWITCH_STATUS_SUCCESS;
}
switch_status_t google_dialogflow_cleanup() {
return SWITCH_STATUS_SUCCESS;
}
// start dialogflow on a channel
switch_status_t google_dialogflow_session_init(
switch_core_session_t *session,
responseHandler_t responseHandler,
errorHandler_t errorHandler,
uint32_t samples_per_second,
char* lang,
char* projectId,
char* event,
char* text,
struct cap_cb **ppUserData
) {
switch_status_t status = SWITCH_STATUS_SUCCESS;
switch_channel_t *channel = switch_core_session_get_channel(session);
int err;
switch_threadattr_t *thd_attr = NULL;
switch_memory_pool_t *pool = switch_core_session_get_pool(session);
struct cap_cb* cb = (struct cap_cb *) switch_core_session_alloc(session, sizeof(*cb));
if (!hasDefaultCredentials && !switch_channel_get_variable(channel, "GOOGLE_APPLICATION_CREDENTIALS")) {
switch_log_printf(SWITCH_CHANNEL_SESSION_LOG(session), SWITCH_LOG_ERROR,
"missing credentials: GOOGLE_APPLICATION_CREDENTIALS must be suuplied either as an env variable (path to file) or a channel variable (json string)\n");
status = SWITCH_STATUS_FALSE;
goto done;
}
strncpy(cb->sessionId, switch_core_session_get_uuid(session), 256);
cb->responseHandler = responseHandler;
cb->errorHandler = errorHandler;
if (switch_mutex_init(&cb->mutex, SWITCH_MUTEX_NESTED, pool) != SWITCH_STATUS_SUCCESS) {
switch_log_printf(SWITCH_CHANNEL_SESSION_LOG(session), SWITCH_LOG_ERROR, "Error initializing mutex\n");
status = SWITCH_STATUS_FALSE;
goto done;
}
strncpy(cb->lang, lang, MAX_LANG);
strncpy(cb->projectId, projectId, MAX_PROJECT_ID);
cb->streamer = new GStreamer(session, lang, projectId, event, text);
cb->resampler = speex_resampler_init(1, 8000, 16000, SWITCH_RESAMPLE_QUALITY, &err);
if (0 != err) {
switch_log_printf(SWITCH_CHANNEL_SESSION_LOG(session), SWITCH_LOG_ERROR, "%s: Error initializing resampler: %s.\n",
switch_channel_get_name(channel), speex_resampler_strerror(err));
status = SWITCH_STATUS_FALSE;
goto done;
}
// hangup hook to clear temp audio files
switch_core_event_hook_add_state_change(session, hanguphook);
// create the read thread
switch_threadattr_create(&thd_attr, pool);
//switch_threadattr_detach_set(thd_attr, 1);
switch_threadattr_stacksize_set(thd_attr, SWITCH_THREAD_STACKSIZE);
switch_thread_create(&cb->thread, thd_attr, grpc_read_thread, cb, pool);
*ppUserData = cb;
done:
if (status != SWITCH_STATUS_SUCCESS) {
killcb(cb);
}
return status;
}
switch_status_t google_dialogflow_session_stop(switch_core_session_t *session, int channelIsClosing) {
switch_channel_t *channel = switch_core_session_get_channel(session);
switch_media_bug_t *bug = (switch_media_bug_t*) switch_channel_get_private(channel, MY_BUG_NAME);
if (bug) {
struct cap_cb *cb = (struct cap_cb *) switch_core_media_bug_get_user_data(bug);
switch_status_t st;
// close connection and get final responses
switch_log_printf(SWITCH_CHANNEL_SESSION_LOG(session), SWITCH_LOG_DEBUG, "google_dialogflow_session_cleanup: acquiring lock\n");
switch_mutex_lock(cb->mutex);
switch_log_printf(SWITCH_CHANNEL_SESSION_LOG(session), SWITCH_LOG_DEBUG, "google_dialogflow_session_cleanup: acquired lock\n");
GStreamer* streamer = (GStreamer *) cb->streamer;
if (streamer) {
switch_log_printf(SWITCH_CHANNEL_SESSION_LOG(session), SWITCH_LOG_DEBUG, "google_dialogflow_session_cleanup: sending writesDone..\n");
streamer->writesDone();
streamer->finish();
}
if (cb->thread) {
switch_status_t retval;
switch_log_printf(SWITCH_CHANNEL_SESSION_LOG(session), SWITCH_LOG_INFO, "google_dialogflow_session_cleanup: waiting for read thread to complete\n");
switch_thread_join(&retval, cb->thread);
cb->thread = NULL;
switch_log_printf(SWITCH_CHANNEL_SESSION_LOG(session), SWITCH_LOG_INFO, "google_dialogflow_session_cleanup: read thread completed\n");
}
killcb(cb);
switch_channel_set_private(channel, MY_BUG_NAME, NULL);
if (!channelIsClosing) switch_core_media_bug_remove(session, &bug);
switch_mutex_unlock(cb->mutex);
switch_log_printf(SWITCH_CHANNEL_SESSION_LOG(session), SWITCH_LOG_INFO, "google_dialogflow_session_cleanup: Closed google session\n");
return SWITCH_STATUS_SUCCESS;
}
switch_log_printf(SWITCH_CHANNEL_SESSION_LOG(session), SWITCH_LOG_INFO, "%s Bug is not attached.\n", switch_channel_get_name(channel));
return SWITCH_STATUS_FALSE;
}
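// Media bug read callback: resample each 8kHz frame to the 16kHz that dialogflow
// expects and write it to the stream; trylock avoids blocking the media thread
// if the session is being torn down.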
switch_bool_t google_dialogflow_frame(switch_media_bug_t *bug, void* user_data) {
switch_core_session_t *session = switch_core_media_bug_get_session(bug);
uint8_t data[SWITCH_RECOMMENDED_BUFFER_SIZE];
switch_frame_t frame = {};
struct cap_cb *cb = (struct cap_cb *) user_data;
frame.data = data;
frame.buflen = SWITCH_RECOMMENDED_BUFFER_SIZE;
if (switch_mutex_trylock(cb->mutex) == SWITCH_STATUS_SUCCESS) {
GStreamer* streamer = (GStreamer *) cb->streamer;
if (streamer && !streamer->isFinished()) {
while (switch_core_media_bug_read(bug, &frame, SWITCH_TRUE) == SWITCH_STATUS_SUCCESS && !switch_test_flag((&frame), SFF_CNG)) {
if (frame.datalen) {
spx_int16_t out[SWITCH_RECOMMENDED_BUFFER_SIZE];
spx_uint32_t out_len = SWITCH_RECOMMENDED_BUFFER_SIZE;
spx_uint32_t in_len = frame.samples;
size_t written;
speex_resampler_process_interleaved_int(cb->resampler, (const spx_int16_t *) frame.data, (spx_uint32_t *) &in_len, &out[0], &out_len);
streamer->write( &out[0], sizeof(spx_int16_t) * out_len);
}
}
}
else {
//switch_log_printf(SWITCH_CHANNEL_SESSION_LOG(session), SWITCH_LOG_DEBUG,
// "google_dialogflow_frame: not sending audio because google channel has been closed\n");
}
switch_mutex_unlock(cb->mutex);
}
else {
//switch_log_printf(SWITCH_CHANNEL_SESSION_LOG(session), SWITCH_LOG_DEBUG,
// "google_dialogflow_frame: not sending audio since failed to get lock on mutex\n");
}
return SWITCH_TRUE;
}
void destroyChannelUserData(struct cap_cb* cb) {
killcb(cb);
}
}


@@ -0,0 +1,12 @@
#ifndef __GOOGLE_GLUE_H__
#define __GOOGLE_GLUE_H__
switch_status_t google_dialogflow_init();
switch_status_t google_dialogflow_cleanup();
switch_status_t google_dialogflow_session_init(switch_core_session_t *session, responseHandler_t responseHandler, errorHandler_t errorHandler,
uint32_t samples_per_second, char* lang, char* projectId, char* welcomeEvent, char *text, struct cap_cb **cb);
switch_status_t google_dialogflow_session_stop(switch_core_session_t *session, int channelIsClosing);
switch_bool_t google_dialogflow_frame(switch_media_bug_t *bug, void* user_data);
void destroyChannelUserData(struct cap_cb* cb);
#endif


@@ -0,0 +1,293 @@
/*
*
* mod_dialogflow.c -- Freeswitch module for connecting a channel to a Google Dialogflow agent
*
*/
#include "mod_dialogflow.h"
#include "google_glue.h"
#define DEFAULT_INTENT_TIMEOUT_SECS (30)
#define DIALOGFLOW_INTENT "dialogflow_intent"
#define DIALOGFLOW_INTENT_AUDIO_FILE "dialogflow_intent_audio_file"
/* Prototypes */
SWITCH_MODULE_SHUTDOWN_FUNCTION(mod_dialogflow_shutdown);
SWITCH_MODULE_RUNTIME_FUNCTION(mod_dialogflow_runtime);
SWITCH_MODULE_LOAD_FUNCTION(mod_dialogflow_load);
SWITCH_MODULE_DEFINITION(mod_dialogflow, mod_dialogflow_load, mod_dialogflow_shutdown, NULL);
static switch_status_t do_stop(switch_core_session_t *session);
static void responseHandler(switch_core_session_t* session, const char * type, char * json) {
switch_event_t *event;
switch_channel_t *channel = switch_core_session_get_channel(session);
switch_log_printf(SWITCH_CHANNEL_LOG, SWITCH_LOG_INFO, "json payload for type %s: %s.\n", type, json);
switch_event_create_subclass(&event, SWITCH_EVENT_CUSTOM, type);
switch_channel_event_set_data(channel, event);
switch_event_add_body(event, "%s", json);
switch_event_fire(&event);
}
static void errorHandler(switch_core_session_t* session, const char * json) {
switch_event_t *event;
switch_channel_t *channel = switch_core_session_get_channel(session);
switch_event_create_subclass(&event, SWITCH_EVENT_CUSTOM, DIALOGFLOW_EVENT_ERROR);
switch_channel_event_set_data(channel, event);
switch_event_add_body(event, "%s", json);
switch_event_fire(&event);
do_stop(session);
}
static switch_bool_t capture_callback(switch_media_bug_t *bug, void *user_data, switch_abc_type_t type)
{
switch_core_session_t *session = switch_core_media_bug_get_session(bug);
switch (type) {
case SWITCH_ABC_TYPE_INIT:
switch_log_printf(SWITCH_CHANNEL_LOG, SWITCH_LOG_INFO, "Got SWITCH_ABC_TYPE_INIT.\n");
break;
case SWITCH_ABC_TYPE_CLOSE:
{
switch_log_printf(SWITCH_CHANNEL_LOG, SWITCH_LOG_INFO, "Got SWITCH_ABC_TYPE_CLOSE.\n");
google_dialogflow_session_stop(session, 1);
switch_log_printf(SWITCH_CHANNEL_LOG, SWITCH_LOG_DEBUG, "Finished SWITCH_ABC_TYPE_CLOSE.\n");
}
break;
case SWITCH_ABC_TYPE_READ:
return google_dialogflow_frame(bug, user_data);
break;
case SWITCH_ABC_TYPE_WRITE:
default:
break;
}
return SWITCH_TRUE;
}
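/* Attach dialogflow to the channel: require at least early media, initialize
   the glue session, then add the media bug that feeds read audio to it. */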
static switch_status_t start_capture(switch_core_session_t *session, switch_media_bug_flag_t flags, char* lang, char* projectId, char* event, char* text)
{
switch_channel_t *channel = switch_core_session_get_channel(session);
switch_media_bug_t *bug;
switch_codec_implementation_t read_impl = { 0 };
struct cap_cb *cb = NULL;
switch_status_t status = SWITCH_STATUS_SUCCESS;
if (switch_channel_get_private(channel, MY_BUG_NAME)) {
switch_log_printf(SWITCH_CHANNEL_LOG, SWITCH_LOG_INFO, "dialogflow is already running on this channel; stopping it first.\n");
do_stop(session);
}
if (switch_channel_pre_answer(channel) != SWITCH_STATUS_SUCCESS) {
switch_log_printf(SWITCH_CHANNEL_LOG, SWITCH_LOG_ERROR, "channel must have at least early media to run dialogflow.\n");
status = SWITCH_STATUS_FALSE;
goto done;
}
switch_log_printf(SWITCH_CHANNEL_LOG, SWITCH_LOG_INFO, "starting dialogflow with project %s, language %s, event %s, text %s.\n",
projectId, lang, event, text);
switch_core_session_get_read_impl(session, &read_impl);
if (SWITCH_STATUS_FALSE == google_dialogflow_session_init(session, responseHandler, errorHandler,
read_impl.samples_per_second, lang, projectId, event, text, &cb)) {
switch_log_printf(SWITCH_CHANNEL_LOG, SWITCH_LOG_ERROR, "Error initializing google dialogflow session.\n");
status = SWITCH_STATUS_FALSE;
goto done;
}
if ((status = switch_core_media_bug_add(session, "dialogflow", NULL, capture_callback, (void *) cb, 0, flags, &bug)) != SWITCH_STATUS_SUCCESS) {
switch_log_printf(SWITCH_CHANNEL_LOG, SWITCH_LOG_ERROR, "Error adding bug.\n");
status = SWITCH_STATUS_FALSE;
goto done;
}
switch_channel_set_private(channel, MY_BUG_NAME, bug);
done:
if (status == SWITCH_STATUS_FALSE) {
if (cb) destroyChannelUserData(cb);
}
return status;
}
static switch_status_t do_stop(switch_core_session_t *session)
{
switch_status_t status = SWITCH_STATUS_SUCCESS;
switch_channel_t *channel = switch_core_session_get_channel(session);
switch_media_bug_t *bug = switch_channel_get_private(channel, MY_BUG_NAME);
if (bug) {
switch_log_printf(SWITCH_CHANNEL_LOG, SWITCH_LOG_INFO, "Received user command to stop dialogflow.\n");
status = google_dialogflow_session_stop(session, 0);
switch_log_printf(SWITCH_CHANNEL_LOG, SWITCH_LOG_INFO, "stopped dialogflow.\n");
}
return status;
}
#define DIALOGFLOW_API_START_SYNTAX "<uuid> project-id lang-code [event [text]]"
SWITCH_STANDARD_API(dialogflow_api_start_function)
{
char *mycmd = NULL, *argv[10] = { 0 };
int argc = 0;
switch_status_t status = SWITCH_STATUS_FALSE;
switch_media_bug_flag_t flags = SMBF_READ_STREAM | SMBF_READ_PING;
switch_log_printf(SWITCH_CHANNEL_SESSION_LOG(session), SWITCH_LOG_DEBUG, "command %s\n", cmd);
if (!zstr(cmd) && (mycmd = strdup(cmd))) {
argc = switch_separate_string(mycmd, ' ', argv, (sizeof(argv) / sizeof(argv[0])));
}
if (zstr(cmd) || argc < 3) {
switch_log_printf(SWITCH_CHANNEL_SESSION_LOG(session), SWITCH_LOG_ERROR, "Error with command %s %s %s.\n", cmd, argv[0], argv[1]);
stream->write_function(stream, "-USAGE: %s\n", DIALOGFLOW_API_START_SYNTAX);
goto done;
} else {
switch_core_session_t *lsession = NULL;
if ((lsession = switch_core_session_locate(argv[0]))) {
char *event = NULL;
char *text = NULL;
char *projectId = argv[1];
char *lang = argv[2];
if (argc > 3) {
event = argv[3];
}
if (argc > 4) {
if (0 == strcmp("none", event)) {
event = NULL;
}
text = argv[4];
}
status = start_capture(lsession, flags, lang, projectId, event, text);
switch_core_session_rwunlock(lsession);
}
}
if (status == SWITCH_STATUS_SUCCESS) {
stream->write_function(stream, "+OK Success\n");
} else {
stream->write_function(stream, "-ERR Operation Failed\n");
}
done:
switch_safe_free(mycmd);
return SWITCH_STATUS_SUCCESS;
}
#define DIALOGFLOW_API_STOP_SYNTAX "<uuid>"
SWITCH_STANDARD_API(dialogflow_api_stop_function)
{
char *mycmd = NULL, *argv[10] = { 0 };
int argc = 0;
switch_status_t status = SWITCH_STATUS_FALSE;
switch_log_printf(SWITCH_CHANNEL_SESSION_LOG(session), SWITCH_LOG_DEBUG, "command %s\n", cmd);
if (!zstr(cmd) && (mycmd = strdup(cmd))) {
argc = switch_separate_string(mycmd, ' ', argv, (sizeof(argv) / sizeof(argv[0])));
}
if (zstr(cmd) || argc != 1) {
switch_log_printf(SWITCH_CHANNEL_SESSION_LOG(session), SWITCH_LOG_ERROR, "Error with command %s %s %s.\n", cmd, argv[0], argv[1]);
stream->write_function(stream, "-USAGE: %s\n", DIALOGFLOW_API_STOP_SYNTAX);
goto done;
} else {
switch_core_session_t *lsession = NULL;
if ((lsession = switch_core_session_locate(argv[0]))) {
status = do_stop(lsession);
switch_core_session_rwunlock(lsession);
}
}
if (status == SWITCH_STATUS_SUCCESS) {
stream->write_function(stream, "+OK Success\n");
} else {
stream->write_function(stream, "-ERR Operation Failed\n");
}
done:
switch_safe_free(mycmd);
return SWITCH_STATUS_SUCCESS;
}
/* Macro expands to: switch_status_t mod_dialogflow_load(switch_loadable_module_interface_t **module_interface, switch_memory_pool_t *pool) */
SWITCH_MODULE_LOAD_FUNCTION(mod_dialogflow_load)
{
switch_api_interface_t *api_interface;
/* create/register custom event message types */
if (switch_event_reserve_subclass(DIALOGFLOW_EVENT_INTENT) != SWITCH_STATUS_SUCCESS) {
switch_log_printf(SWITCH_CHANNEL_LOG, SWITCH_LOG_ERROR, "Couldn't register subclass %s!\n", DIALOGFLOW_EVENT_INTENT);
return SWITCH_STATUS_TERM;
}
if (switch_event_reserve_subclass(DIALOGFLOW_EVENT_TRANSCRIPTION) != SWITCH_STATUS_SUCCESS) {
switch_log_printf(SWITCH_CHANNEL_LOG, SWITCH_LOG_ERROR, "Couldn't register subclass %s!\n", DIALOGFLOW_EVENT_TRANSCRIPTION);
return SWITCH_STATUS_TERM;
}
if (switch_event_reserve_subclass(DIALOGFLOW_EVENT_END_OF_UTTERANCE) != SWITCH_STATUS_SUCCESS) {
switch_log_printf(SWITCH_CHANNEL_LOG, SWITCH_LOG_ERROR, "Couldn't register subclass %s!\n", DIALOGFLOW_EVENT_END_OF_UTTERANCE);
return SWITCH_STATUS_TERM;
}
if (switch_event_reserve_subclass(DIALOGFLOW_EVENT_AUDIO_PROVIDED) != SWITCH_STATUS_SUCCESS) {
switch_log_printf(SWITCH_CHANNEL_LOG, SWITCH_LOG_ERROR, "Couldn't register subclass %s!\n", DIALOGFLOW_EVENT_AUDIO_PROVIDED);
return SWITCH_STATUS_TERM;
}
if (switch_event_reserve_subclass(DIALOGFLOW_EVENT_ERROR) != SWITCH_STATUS_SUCCESS) {
switch_log_printf(SWITCH_CHANNEL_LOG, SWITCH_LOG_ERROR, "Couldn't register subclass %s!\n", DIALOGFLOW_EVENT_ERROR);
return SWITCH_STATUS_TERM;
}
/* connect my internal structure to the blank pointer passed to me */
*module_interface = switch_loadable_module_create_module_interface(pool, modname);
switch_log_printf(SWITCH_CHANNEL_LOG, SWITCH_LOG_NOTICE, "Google Dialogflow API loading..\n");
if (SWITCH_STATUS_FALSE == google_dialogflow_init()) {
switch_log_printf(SWITCH_CHANNEL_LOG, SWITCH_LOG_CRIT, "Failed initializing google dialogflow interface\n");
}
switch_log_printf(SWITCH_CHANNEL_LOG, SWITCH_LOG_NOTICE, "Google Dialogflow API successfully loaded\n");
SWITCH_ADD_API(api_interface, "dialogflow_start", "Start a google dialogflow", dialogflow_api_start_function, DIALOGFLOW_API_START_SYNTAX);
SWITCH_ADD_API(api_interface, "dialogflow_stop", "Terminate a google dialogflow", dialogflow_api_stop_function, DIALOGFLOW_API_STOP_SYNTAX);
switch_console_set_complete("add dialogflow_stop");
switch_console_set_complete("add dialogflow_start uuid project-id lang-code");
switch_console_set_complete("add dialogflow_start uuid project-id lang-code event");
switch_console_set_complete("add dialogflow_start uuid project-id lang-code event text");
/* indicate that the module should continue to be loaded */
return SWITCH_STATUS_SUCCESS;
}
/*
Called when the system shuts down
Macro expands to: switch_status_t mod_dialogflow_shutdown() */
SWITCH_MODULE_SHUTDOWN_FUNCTION(mod_dialogflow_shutdown)
{
google_dialogflow_cleanup();
switch_event_free_subclass(DIALOGFLOW_EVENT_INTENT);
switch_event_free_subclass(DIALOGFLOW_EVENT_TRANSCRIPTION);
switch_event_free_subclass(DIALOGFLOW_EVENT_END_OF_UTTERANCE);
switch_event_free_subclass(DIALOGFLOW_EVENT_AUDIO_PROVIDED);
switch_event_free_subclass(DIALOGFLOW_EVENT_ERROR);
return SWITCH_STATUS_SUCCESS;
}


@@ -0,0 +1,37 @@
#ifndef __MOD_DIALOGFLOW_H__
#define __MOD_DIALOGFLOW_H__
#include <switch.h>
#include <speex/speex_resampler.h>
#include <unistd.h>
#define MY_BUG_NAME "__dialogflow_bug__"
#define DIALOGFLOW_EVENT_INTENT "dialogflow::intent"
#define DIALOGFLOW_EVENT_TRANSCRIPTION "dialogflow::transcription"
#define DIALOGFLOW_EVENT_AUDIO_PROVIDED "dialogflow::audio_provided"
#define DIALOGFLOW_EVENT_END_OF_UTTERANCE "dialogflow::end_of_utterance"
#define DIALOGFLOW_EVENT_ERROR "dialogflow::error"
#define MAX_LANG (12)
#define MAX_PROJECT_ID (128)
#define MAX_PATHLEN (256)
/* per-channel data */
typedef void (*responseHandler_t)(switch_core_session_t* session, const char * type, char* json);
typedef void (*errorHandler_t)(switch_core_session_t* session, const char * reason);
struct cap_cb {
switch_mutex_t *mutex;
char sessionId[256];
SpeexResamplerState *resampler;
void* streamer;
responseHandler_t responseHandler;
errorHandler_t errorHandler;
switch_thread_t* thread;
char lang[MAX_LANG];
char projectId[MAX_PROJECT_ID];
};
#endif


@@ -0,0 +1,567 @@
#include "parser.h"
#include <switch.h>
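// Generic helper: convert a repeated protobuf field to a cJSON array by
// applying the matching parse() overload to each element.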
template <typename T> cJSON* GRPCParser::parseCollection(const RepeatedPtrField<T> coll) {
cJSON* json = cJSON_CreateArray();
typename RepeatedPtrField<T>::const_iterator it = coll.begin();
for (; it != coll.end(); it++) {
cJSON_AddItemToArray(json, parse(*it));
}
return json;
}
const std::string& GRPCParser::parseAudio(const StreamingDetectIntentResponse& response) {
return response.output_audio();
}
cJSON* GRPCParser::parse(const StreamingDetectIntentResponse& response) {
cJSON * json = cJSON_CreateObject();
switch_log_printf(SWITCH_CHANNEL_SESSION_LOG(m_session), SWITCH_LOG_INFO, "GRPCParser - parsing StreamingDetectIntentResponse\n");
// response_id
cJSON_AddItemToObject(json, "response_id",cJSON_CreateString(response.response_id().c_str()));
// recognition_result
if (response.has_recognition_result()) {
switch_log_printf(SWITCH_CHANNEL_SESSION_LOG(m_session), SWITCH_LOG_INFO, "GRPCParser - adding recognition result\n");
cJSON_AddItemToObject(json, "recognition_result", parse(response.recognition_result()));
}
// query_result
if (response.has_query_result()) {
switch_log_printf(SWITCH_CHANNEL_SESSION_LOG(m_session), SWITCH_LOG_INFO, "GRPCParser - adding query result\n");
cJSON_AddItemToObject(json, "query_result", parse(response.query_result()));
}
// alternative_query_results
cJSON_AddItemToObject(json, "alternative_query_results", parseCollection(response.alternative_query_results()));
// webhook_status
cJSON_AddItemToObject(json, "webhook_status", parse(response.webhook_status()));
//
if (response.has_output_audio_config()) {
switch_log_printf(SWITCH_CHANNEL_SESSION_LOG(m_session), SWITCH_LOG_INFO, "GRPCParser - adding audio config\n");
cJSON_AddItemToObject(json, "output_audio_config", parse(response.output_audio_config()));
}
// XXXX: not doing anything with output_audio for the moment
return json;
}
cJSON* GRPCParser::parse(const OutputAudioEncoding& o) {
return cJSON_CreateString(OutputAudioEncoding_Name(o).c_str());
}
cJSON* GRPCParser::parse(const OutputAudioConfig& o) {
cJSON * json = cJSON_CreateObject();
cJSON_AddItemToObject(json, "audio_encoding", parse(o.audio_encoding()));
cJSON_AddItemToObject(json, "sample_rate_hertz", cJSON_CreateNumber(o.sample_rate_hertz()));
cJSON_AddItemToObject(json, "synthesize_speech_config", parse(o.synthesize_speech_config()));
return json;
}
cJSON* GRPCParser::parse(const SynthesizeSpeechConfig& o) {
cJSON * json = cJSON_CreateObject();
cJSON_AddItemToObject(json, "speaking_rate", cJSON_CreateNumber(o.speaking_rate()));
cJSON_AddItemToObject(json, "pitch", cJSON_CreateNumber(o.pitch()));
cJSON_AddItemToObject(json, "volume_gain_db", cJSON_CreateNumber(o.volume_gain_db()));
cJSON_AddItemToObject(json, "effects_profile_id", parseCollection(o.effects_profile_id()));
cJSON_AddItemToObject(json, "voice", parse(o.voice()));
return json;
}
cJSON* GRPCParser::parse(const SsmlVoiceGender& o) {
return cJSON_CreateString(SsmlVoiceGender_Name(o).c_str());
}
cJSON* GRPCParser::parse(const VoiceSelectionParams& o) {
cJSON * json = cJSON_CreateObject();
cJSON_AddItemToObject(json, "name", cJSON_CreateString(o.name().c_str()));
cJSON_AddItemToObject(json, "ssml_gender", parse(o.ssml_gender()));
return json;
}
cJSON* GRPCParser::parse(const google::rpc::Status& o) {
cJSON * json = cJSON_CreateObject();
cJSON_AddItemToObject(json, "code", cJSON_CreateNumber(o.code()));
cJSON_AddItemToObject(json, "message", cJSON_CreateString(o.message().c_str()));
return json;
}
cJSON* GRPCParser::parse(const Value& value) {
cJSON* json = NULL;
switch (value.kind_case()) {
case Value::KindCase::kNullValue:
json = cJSON_CreateNull();
break;
case Value::KindCase::kNumberValue:
json = cJSON_CreateNumber(value.number_value());
break;
case Value::KindCase::kStringValue:
json = cJSON_CreateString(value.string_value().c_str());
break;
case Value::KindCase::kBoolValue:
json = cJSON_CreateBool(value.bool_value());
break;
case Value::KindCase::kStructValue:
json = parse(value.struct_value());
break;
case Value::KindCase::kListValue:
{
const ListValue& list = value.list_value();
json = cJSON_CreateArray();
for (int i = 0; i < list.values_size(); i++) {
const Value& val = list.values(i);
cJSON_AddItemToArray(json, parse(val));
}
}
break;
}
return json;
}
cJSON* GRPCParser::parse(const Struct& rpcStruct) {
cJSON* json = cJSON_CreateObject();
for (StructIterator_t it = rpcStruct.fields().begin(); it != rpcStruct.fields().end(); it++) {
const std::string& key = it->first;
const Value& value = it->second;
cJSON_AddItemToObject(json, key.c_str(), parse(value));
}
return json;
}
cJSON* GRPCParser::parse(const Intent_Message_SimpleResponse& o) {
cJSON * json = cJSON_CreateObject();
cJSON_AddItemToObject(json, "ssml", cJSON_CreateString(o.ssml().c_str()));
cJSON_AddItemToObject(json, "text_to_speech", cJSON_CreateString(o.text_to_speech().c_str()));
cJSON_AddItemToObject(json, "display_text", cJSON_CreateString(o.display_text().c_str()));
return json;
}
cJSON* GRPCParser::parse(const Intent_Message_SimpleResponses& o) {
cJSON * json = cJSON_CreateObject();
cJSON_AddItemToObject(json, "simple_responses", parseCollection(o.simple_responses()));
return json;
}
cJSON* GRPCParser::parse(const Intent_Message_Image& o) {
cJSON * json = cJSON_CreateObject();
cJSON_AddItemToObject(json, "accessibility_text", cJSON_CreateString(o.accessibility_text().c_str()));
cJSON_AddItemToObject(json, "image_uri", cJSON_CreateString(o.image_uri().c_str()));
return json;
}
cJSON* GRPCParser::parse(const Intent_Message_BasicCard_Button_OpenUriAction& o) {
cJSON * json = cJSON_CreateObject();
cJSON_AddItemToObject(json, "uri", cJSON_CreateString(o.uri().c_str()));
return json;
}
cJSON* GRPCParser::parse(const Intent_Message_BasicCard_Button& o) {
cJSON * json = cJSON_CreateObject();
cJSON_AddItemToObject(json, "title", cJSON_CreateString(o.title().c_str()));
cJSON_AddItemToObject(json, "open_uri_action", parse(o.open_uri_action()));
return json;
}
cJSON* GRPCParser::parse(const Intent_Message_Card_Button& o) {
cJSON * json = cJSON_CreateObject();
cJSON_AddItemToObject(json, "text", cJSON_CreateString(o.text().c_str()));
cJSON_AddItemToObject(json, "postback", parse(o.postback()));
return json;
}
cJSON* GRPCParser::parse(const Intent_Message_BasicCard& o) {
cJSON * json = cJSON_CreateObject();
cJSON_AddItemToObject(json, "title", cJSON_CreateString(o.title().c_str()));
cJSON_AddItemToObject(json, "subtitle", cJSON_CreateString(o.subtitle().c_str()));
cJSON_AddItemToObject(json, "formatted_text", cJSON_CreateString(o.formatted_text().c_str()));
cJSON_AddItemToObject(json, "image", parse(o.image()));
cJSON_AddItemToObject(json, "buttons", parseCollection(o.buttons()));
return json;
}
cJSON* GRPCParser::parse(const Intent_Message_Card& o) {
cJSON * json = cJSON_CreateObject();
cJSON_AddItemToObject(json, "title", cJSON_CreateString(o.title().c_str()));
cJSON_AddItemToObject(json, "subtitle", cJSON_CreateString(o.subtitle().c_str()));
cJSON_AddItemToObject(json, "image_uri", cJSON_CreateString(o.image_uri().c_str()));
cJSON_AddItemToObject(json, "buttons", parseCollection(o.buttons()));
return json;
}
cJSON* GRPCParser::parse(const Intent_Message_Suggestion& o) {
cJSON * json = cJSON_CreateObject();
cJSON_AddItemToObject(json, "title", cJSON_CreateString(o.title().c_str()));
return json;
}
cJSON* GRPCParser::parse(const Intent_Message_Suggestions& o) {
cJSON * json = cJSON_CreateObject();
cJSON_AddItemToObject(json, "suggestions", parseCollection(o.suggestions()));
return json;
}
cJSON* GRPCParser::parse(const std::string& val) {
return cJSON_CreateString(val.c_str());
}
cJSON* GRPCParser::parse(const Intent_Message_LinkOutSuggestion& o) {
cJSON * json = cJSON_CreateObject();
cJSON_AddItemToObject(json, "destination_name", cJSON_CreateString(o.destination_name().c_str()));
cJSON_AddItemToObject(json, "uri", cJSON_CreateString(o.uri().c_str()));
return json;
}
cJSON* GRPCParser::parse(const Intent_Message_SelectItemInfo& o) {
cJSON * json = cJSON_CreateObject();
cJSON_AddItemToObject(json, "key", cJSON_CreateString(o.key().c_str()));
cJSON_AddItemToObject(json, "synonyms", parseCollection(o.synonyms()));
return json;
}
cJSON* GRPCParser::parse(const Intent_Message_ListSelect_Item& o) {
cJSON * json = cJSON_CreateObject();
cJSON_AddItemToObject(json, "info", parse(o.info()));
cJSON_AddItemToObject(json, "title", cJSON_CreateString(o.title().c_str()));
cJSON_AddItemToObject(json, "description", cJSON_CreateString(o.description().c_str()));
cJSON_AddItemToObject(json, "image", parse(o.image()));
return json;
}
cJSON* GRPCParser::parse(const Intent_Message_CarouselSelect& o) {
cJSON * json = cJSON_CreateObject();
cJSON_AddItemToObject(json, "items", parseCollection(o.items()));
return json;
}
cJSON* GRPCParser::parse(const Intent_Message_CarouselSelect_Item& o) {
cJSON * json = cJSON_CreateObject();
cJSON_AddItemToObject(json, "info", parse(o.info()));
cJSON_AddItemToObject(json, "title", cJSON_CreateString(o.title().c_str()));
cJSON_AddItemToObject(json, "description", cJSON_CreateString(o.description().c_str()));
cJSON_AddItemToObject(json, "image", parse(o.image()));
return json;
}
cJSON* GRPCParser::parse(const Intent_Message_ListSelect& o) {
cJSON * json = cJSON_CreateObject();
cJSON_AddItemToObject(json, "title", cJSON_CreateString(o.title().c_str()));
cJSON_AddItemToObject(json, "items", parseCollection(o.items()));
return json;
}
cJSON* GRPCParser::parse(const Intent_Message_TelephonyPlayAudio& o) {
cJSON * json = cJSON_CreateObject();
cJSON_AddItemToObject(json, "audio_uri", cJSON_CreateString(o.audio_uri().c_str()));
return json;
}
cJSON* GRPCParser::parse(const Intent_Message_TelephonySynthesizeSpeech& o) {
cJSON * json = cJSON_CreateObject();
cJSON_AddItemToObject(json, "text", cJSON_CreateString(o.text().c_str()));
cJSON_AddItemToObject(json, "ssml", cJSON_CreateString(o.ssml().c_str()));
return json;
}
cJSON* GRPCParser::parse(const Intent_Message_TelephonyTransferCall& o) {
cJSON * json = cJSON_CreateObject();
cJSON_AddItemToObject(json, "phone_number", cJSON_CreateString(o.phone_number().c_str()));
return json;
}
cJSON* GRPCParser::parse(const Intent_Message_QuickReplies& o) {
cJSON * json = cJSON_CreateObject();
cJSON_AddItemToObject(json, "title", cJSON_CreateString(o.title().c_str()));
cJSON_AddItemToObject(json, "quick_replies", parseCollection(o.quick_replies()));
return json;
}
cJSON* GRPCParser::parse(const Intent_Message_Text& o) {
cJSON * json = cJSON_CreateObject();
cJSON_AddItemToObject(json, "text", parseCollection(o.text()));
return json;
}
cJSON* GRPCParser::parse(const Intent_TrainingPhrase_Part& o) {
cJSON * json = cJSON_CreateObject();
cJSON_AddItemToObject(json, "text", cJSON_CreateString(o.text().c_str()));
cJSON_AddItemToObject(json, "entity_type", cJSON_CreateString(o.entity_type().c_str()));
cJSON_AddItemToObject(json, "alias", cJSON_CreateString(o.alias().c_str()));
cJSON_AddItemToObject(json, "user", cJSON_CreateBool(o.user_defined()));
return json;
}
cJSON* GRPCParser::parse(const Intent_WebhookState& o) {
return cJSON_CreateString(Intent_WebhookState_Name(o).c_str());
}
cJSON* GRPCParser::parse(const Intent_TrainingPhrase_Type& o) {
return cJSON_CreateString(Intent_TrainingPhrase_Type_Name(o).c_str());
}
cJSON* GRPCParser::parse(const Intent_TrainingPhrase& o) {
cJSON * json = cJSON_CreateObject();
cJSON_AddItemToObject(json, "name", cJSON_CreateString(o.name().c_str()));
cJSON_AddItemToObject(json, "type", parse(o.type()));
cJSON_AddItemToObject(json, "parts", parseCollection(o.parts()));
cJSON_AddItemToObject(json, "times_added_count", cJSON_CreateNumber(o.times_added_count()));
return json;
}
cJSON* GRPCParser::parse(const Intent_Parameter& o) {
cJSON * json = cJSON_CreateObject();
cJSON_AddItemToObject(json, "name", cJSON_CreateString(o.name().c_str()));
cJSON_AddItemToObject(json, "display_name", cJSON_CreateString(o.display_name().c_str()));
cJSON_AddItemToObject(json, "value", cJSON_CreateString(o.value().c_str()));
cJSON_AddItemToObject(json, "default_value", cJSON_CreateString(o.default_value().c_str()));
cJSON_AddItemToObject(json, "entity_type_display_name", cJSON_CreateString(o.entity_type_display_name().c_str()));
cJSON_AddItemToObject(json, "mandatory", cJSON_CreateBool(o.mandatory()));
cJSON_AddItemToObject(json, "prompts", parseCollection(o.prompts()));
cJSON_AddItemToObject(json, "is_list", cJSON_CreateBool(o.is_list()));
return json;
}
cJSON* GRPCParser::parse(const Intent_FollowupIntentInfo& o) {
cJSON * json = cJSON_CreateObject();
cJSON_AddItemToObject(json, "followup_intent_name", cJSON_CreateString(o.followup_intent_name().c_str()));
cJSON_AddItemToObject(json, "parent_followup_intent_name", cJSON_CreateString(o.parent_followup_intent_name().c_str()));
return json;
}
cJSON* GRPCParser::parse(const Sentiment& o) {
cJSON * json = cJSON_CreateObject();
cJSON_AddItemToObject(json, "score", cJSON_CreateNumber(o.score()));
cJSON_AddItemToObject(json, "magnitude", cJSON_CreateNumber(o.magnitude()));
return json;
}
cJSON* GRPCParser::parse(const SentimentAnalysisResult& o) {
cJSON * json = cJSON_CreateObject();
cJSON_AddItemToObject(json, "query_text_sentiment", parse(o.query_text_sentiment()));
return json;
}
cJSON* GRPCParser::parse(const KnowledgeAnswers_Answer_MatchConfidenceLevel& o) {
return cJSON_CreateString(KnowledgeAnswers_Answer_MatchConfidenceLevel_Name(o).c_str());
}
cJSON* GRPCParser::parse(const KnowledgeAnswers_Answer& o) {
cJSON * json = cJSON_CreateObject();
cJSON_AddItemToObject(json, "source", cJSON_CreateString(o.source().c_str()));
cJSON_AddItemToObject(json, "faq_question", cJSON_CreateString(o.faq_question().c_str()));
cJSON_AddItemToObject(json, "answer", cJSON_CreateString(o.answer().c_str()));
cJSON_AddItemToObject(json, "match_confidence_level", parse(o.match_confidence_level()));
cJSON_AddItemToObject(json, "match_confidence", cJSON_CreateNumber(o.match_confidence()));
return json;
}
cJSON* GRPCParser::parse(const KnowledgeAnswers& o) {
cJSON * json = cJSON_CreateObject();
cJSON_AddItemToObject(json, "answers", parseCollection(o.answers()));
return json;
}
cJSON* GRPCParser::parse(const Intent& o) {
cJSON * json = cJSON_CreateObject();
cJSON_AddItemToObject(json, "name", cJSON_CreateString(o.name().c_str()));
cJSON_AddItemToObject(json, "display_name", cJSON_CreateString(o.display_name().c_str()));
cJSON_AddItemToObject(json, "webhook_state", parse(o.webhook_state()));
cJSON_AddItemToObject(json, "priority", cJSON_CreateNumber(o.priority()));
cJSON_AddItemToObject(json, "is_fallback", cJSON_CreateBool(o.is_fallback()));
cJSON_AddItemToObject(json, "ml_disabled", cJSON_CreateBool(o.ml_disabled()));
cJSON_AddItemToObject(json, "end_interaction", cJSON_CreateBool(o.end_interaction()));
cJSON_AddItemToObject(json, "input_context_names", parseCollection(o.input_context_names()));
cJSON_AddItemToObject(json, "events", parseCollection(o.events()));
cJSON_AddItemToObject(json, "training_phrases", parseCollection(o.training_phrases()));
cJSON_AddItemToObject(json, "action", cJSON_CreateString(o.action().c_str()));
cJSON_AddItemToObject(json, "output_contexts", parseCollection(o.output_contexts()));
cJSON_AddItemToObject(json, "reset_contexts", cJSON_CreateBool(o.reset_contexts()));
cJSON_AddItemToObject(json, "parameters", parseCollection(o.parameters()));
cJSON_AddItemToObject(json, "messages", parseCollection(o.messages()));
cJSON* j = cJSON_CreateArray();
for (int i = 0; i < o.default_response_platforms_size(); i++) {
cJSON_AddItemToArray(j, cJSON_CreateString(Intent_Message_Platform_Name(o.default_response_platforms(i)).c_str()));
}
cJSON_AddItemToObject(json, "default_response_platforms", j);
cJSON_AddItemToObject(json, "root_followup_intent_name", cJSON_CreateString(o.root_followup_intent_name().c_str()));
cJSON_AddItemToObject(json, "followup_intent_info", parseCollection(o.followup_intent_info()));
return json;
}
cJSON* GRPCParser::parse(const google::cloud::dialogflow::v2beta1::Context& o) {
cJSON * json = cJSON_CreateObject();
cJSON_AddItemToObject(json, "name", cJSON_CreateString(o.name().c_str()));
cJSON_AddItemToObject(json, "lifespan_count", cJSON_CreateNumber(o.lifespan_count()));
if (o.has_parameters()) cJSON_AddItemToObject(json, "parameters", parse(o.parameters()));
return json;
}
cJSON* GRPCParser::parse(const Intent_Message& msg) {
cJSON * json = cJSON_CreateObject();
auto platform = msg.platform();
cJSON_AddItemToObject(json, "platform", cJSON_CreateString(Intent_Message_Platform_Name(platform).c_str()));
if (msg.has_text()) {
cJSON_AddItemToObject(json, "text", parse(msg.text()));
}
if (msg.has_image()) {
cJSON_AddItemToObject(json, "image", parse(msg.image()));
}
if (msg.has_quick_replies()) {
cJSON_AddItemToObject(json, "quick_replies", parse(msg.quick_replies()));
}
if (msg.has_card()) {
cJSON_AddItemToObject(json, "card", parse(msg.card()));
}
if (msg.has_payload()) {
cJSON_AddItemToObject(json, "payload", parse(msg.payload()));
}
if (msg.has_simple_responses()) {
cJSON_AddItemToObject(json, "simple_responses", parse(msg.simple_responses()));
}
if (msg.has_basic_card()) {
cJSON_AddItemToObject(json, "basic_card", parse(msg.card()));
}
if (msg.has_suggestions()) {
cJSON_AddItemToObject(json, "suggestions", parse(msg.suggestions()));
}
if (msg.has_link_out_suggestion()) {
cJSON_AddItemToObject(json, "link_out_suggestion", parse(msg.link_out_suggestion()));
}
if (msg.has_list_select()) {
cJSON_AddItemToObject(json, "list_select", parse(msg.list_select()));
}
if (msg.has_telephony_play_audio()) {
cJSON_AddItemToObject(json, "telephony_play_audio", parse(msg.telephony_play_audio()));
}
if (msg.has_telephony_synthesize_speech()) {
cJSON_AddItemToObject(json, "telephony_synthesize_speech", parse(msg.telephony_synthesize_speech()));
}
if (msg.has_telephony_transfer_call()) {
cJSON_AddItemToObject(json, "telephony_transfer_call", parse(msg.telephony_transfer_call()));
}
return json;
}
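// Flatten a QueryResult into the JSON payload delivered to the application:
// the recognized text, matched intent, parameters, fulfillment messages and
// output contexts, plus the webhook payload and diagnostic info when present.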
cJSON* GRPCParser::parse(const QueryResult& qr) {
cJSON * json = cJSON_CreateObject();
cJSON_AddItemToObject(json, "query_text", cJSON_CreateString(qr.query_text().c_str()));
cJSON_AddItemToObject(json, "language_code", cJSON_CreateString(qr.language_code().c_str()));
cJSON_AddItemToObject(json, "speech_recognition_confidence", cJSON_CreateNumber(qr.speech_recognition_confidence()));
cJSON_AddItemToObject(json, "action", cJSON_CreateString(qr.action().c_str()));
cJSON_AddItemToObject(json, "parameters", parse(qr.parameters()));
cJSON_AddItemToObject(json, "all_required_params_present", cJSON_CreateBool(qr.all_required_params_present()));
cJSON_AddItemToObject(json, "fulfillment_text", cJSON_CreateString(qr.fulfillment_text().c_str()));
cJSON_AddItemToObject(json, "fulfillment_messages", parseCollection(qr.fulfillment_messages()));
cJSON_AddItemToObject(json, "webhook_source", cJSON_CreateString(qr.webhook_source().c_str()));
if (qr.has_webhook_payload()) cJSON_AddItemToObject(json, "webhook_payload", parse(qr.webhook_payload()));
cJSON_AddItemToObject(json, "output_contexts", parseCollection(qr.output_contexts()));
cJSON_AddItemToObject(json, "intent", parse(qr.intent()));
cJSON_AddItemToObject(json, "intent_detection_confidence", cJSON_CreateNumber(qr.intent_detection_confidence()));
if (qr.has_diagnostic_info()) cJSON_AddItemToObject(json, "diagnostic_info", parse(qr.diagnostic_info()));
cJSON_AddItemToObject(json, "sentiment_analysis_result", parse(qr.sentiment_analysis_result()));
cJSON_AddItemToObject(json, "knowledge_answers", parse(qr.knowledge_answers()));
return json;
}
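// Interim and final transcriptions arrive as StreamingRecognitionResult
// messages; message_type is rendered as its enum name (e.g. TRANSCRIPT or
// END_OF_SINGLE_UTTERANCE) so the application can tell them apart.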
cJSON* GRPCParser::parse(const StreamingRecognitionResult_MessageType& o) {
return cJSON_CreateString(StreamingRecognitionResult_MessageType_Name(o).c_str());
}
cJSON* GRPCParser::parse(const StreamingRecognitionResult& o) {
cJSON * json = cJSON_CreateObject();
cJSON_AddItemToObject(json, "message_type", parse(o.message_type()));
cJSON_AddItemToObject(json, "transcript", cJSON_CreateString(o.transcript().c_str()));
cJSON_AddItemToObject(json, "is_final", cJSON_CreateBool(o.is_final()));
cJSON_AddItemToObject(json, "confidence", cJSON_CreateNumber(o.confidence()));
return json;
}

137
mod_dialogflow_cx/parser.h Normal file
View File

@@ -0,0 +1,137 @@
#ifndef __PARSER_H__
#define __PARSER_H__
#include <switch_json.h>
#include <grpc++/grpc++.h>
#include "google/cloud/dialogflow/v2beta1/session.grpc.pb.h"
using google::cloud::dialogflow::v2beta1::Sessions;
using google::cloud::dialogflow::v2beta1::StreamingDetectIntentRequest;
using google::cloud::dialogflow::v2beta1::StreamingDetectIntentResponse;
using google::cloud::dialogflow::v2beta1::AudioEncoding;
using google::cloud::dialogflow::v2beta1::InputAudioConfig;
using google::cloud::dialogflow::v2beta1::OutputAudioConfig;
using google::cloud::dialogflow::v2beta1::SynthesizeSpeechConfig;
using google::cloud::dialogflow::v2beta1::VoiceSelectionParams;
using google::cloud::dialogflow::v2beta1::SsmlVoiceGender;
using google::cloud::dialogflow::v2beta1::SsmlVoiceGender_Name;
using google::cloud::dialogflow::v2beta1::QueryInput;
using google::cloud::dialogflow::v2beta1::QueryResult;
using google::cloud::dialogflow::v2beta1::StreamingRecognitionResult;
using google::cloud::dialogflow::v2beta1::StreamingRecognitionResult_MessageType;
using google::cloud::dialogflow::v2beta1::StreamingRecognitionResult_MessageType_Name;
using google::cloud::dialogflow::v2beta1::EventInput;
using google::cloud::dialogflow::v2beta1::OutputAudioEncoding;
using google::cloud::dialogflow::v2beta1::OutputAudioEncoding_Name;
using google::cloud::dialogflow::v2beta1::Context;
using google::cloud::dialogflow::v2beta1::Sentiment;
using google::cloud::dialogflow::v2beta1::SentimentAnalysisResult;
using google::cloud::dialogflow::v2beta1::KnowledgeAnswers;
using google::cloud::dialogflow::v2beta1::KnowledgeAnswers_Answer;
using google::cloud::dialogflow::v2beta1::KnowledgeAnswers_Answer_MatchConfidenceLevel;
using google::cloud::dialogflow::v2beta1::KnowledgeAnswers_Answer_MatchConfidenceLevel_Name;
using google::cloud::dialogflow::v2beta1::Intent;
using google::cloud::dialogflow::v2beta1::Intent_FollowupIntentInfo;
using google::cloud::dialogflow::v2beta1::Intent_WebhookState;
using google::cloud::dialogflow::v2beta1::Intent_WebhookState_Name;
using google::cloud::dialogflow::v2beta1::Intent_Parameter;
using google::cloud::dialogflow::v2beta1::Intent_TrainingPhrase;
using google::cloud::dialogflow::v2beta1::Intent_TrainingPhrase_Type;
using google::cloud::dialogflow::v2beta1::Intent_TrainingPhrase_Part;
using google::cloud::dialogflow::v2beta1::Intent_TrainingPhrase_Type_Name;
using google::cloud::dialogflow::v2beta1::Intent_Message;
using google::cloud::dialogflow::v2beta1::Intent_Message_QuickReplies;
using google::cloud::dialogflow::v2beta1::Intent_Message_Platform_Name;
using google::cloud::dialogflow::v2beta1::Intent_Message_SimpleResponses;
using google::cloud::dialogflow::v2beta1::Intent_Message_SimpleResponse;
using google::cloud::dialogflow::v2beta1::Intent_Message_BasicCard;
using google::cloud::dialogflow::v2beta1::Intent_Message_Card;
using google::cloud::dialogflow::v2beta1::Intent_Message_Image;
using google::cloud::dialogflow::v2beta1::Intent_Message_Text;
using google::cloud::dialogflow::v2beta1::Intent_Message_Card_Button;
using google::cloud::dialogflow::v2beta1::Intent_Message_BasicCard_Button;
using google::cloud::dialogflow::v2beta1::Intent_Message_BasicCard_Button_OpenUriAction;
using google::cloud::dialogflow::v2beta1::Intent_Message_Suggestion;
using google::cloud::dialogflow::v2beta1::Intent_Message_Suggestions;
using google::cloud::dialogflow::v2beta1::Intent_Message_LinkOutSuggestion;
using google::cloud::dialogflow::v2beta1::Intent_Message_ListSelect;
using google::cloud::dialogflow::v2beta1::Intent_Message_CarouselSelect;
using google::cloud::dialogflow::v2beta1::Intent_Message_CarouselSelect_Item;
using google::cloud::dialogflow::v2beta1::Intent_Message_ListSelect_Item;
using google::cloud::dialogflow::v2beta1::Intent_Message_SelectItemInfo;
using google::cloud::dialogflow::v2beta1::Intent_Message_TelephonyPlayAudio;
using google::cloud::dialogflow::v2beta1::Intent_Message_TelephonySynthesizeSpeech;
using google::cloud::dialogflow::v2beta1::Intent_Message_TelephonyTransferCall;
using google::protobuf::RepeatedPtrField;
using google::rpc::Status;
using google::protobuf::Struct;
using google::protobuf::Value;
using google::protobuf::ListValue;
typedef google::protobuf::Map< std::string, Value >::const_iterator StructIterator_t;
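// Usage sketch (illustrative only; 'session' and 'response' are assumed to
// come from the module's media-bug callback, not defined in this header):
//
//   GRPCParser parser(session);                 // session: switch_core_session_t*
//   cJSON* json = parser.parse(response);       // response: StreamingDetectIntentResponse
//   char* text = cJSON_PrintUnformatted(json);  // serialize for the custom event body
//   ...
//   switch_safe_free(text);
//   cJSON_Delete(json);                         // parse() allocates; the caller owns the tree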
class GRPCParser {
public:
GRPCParser(switch_core_session_t *session) : m_session(session) {}
~GRPCParser() {}
template <typename T> cJSON* parseCollection(const RepeatedPtrField<T> coll) ;
cJSON* parse(const StreamingDetectIntentResponse& response) ;
const std::string& parseAudio(const StreamingDetectIntentResponse& response);
cJSON* parse(const OutputAudioEncoding& o) ;
cJSON* parse(const OutputAudioConfig& o) ;
cJSON* parse(const SynthesizeSpeechConfig& o) ;
cJSON* parse(const SsmlVoiceGender& o) ;
cJSON* parse(const VoiceSelectionParams& o) ;
cJSON* parse(const google::rpc::Status& o) ;
cJSON* parse(const Value& value) ;
cJSON* parse(const Struct& rpcStruct) ;
cJSON* parse(const Intent_Message_SimpleResponses& o) ;
cJSON* parse(const Intent_Message_SimpleResponse& o) ;
cJSON* parse(const Intent_Message_Image& o) ;
cJSON* parse(const Intent_Message_BasicCard_Button_OpenUriAction& o) ;
cJSON* parse(const Intent_Message_BasicCard_Button& o) ;
cJSON* parse(const Intent_Message_Card_Button& o) ;
cJSON* parse(const Intent_Message_BasicCard& o) ;
cJSON* parse(const Intent_Message_Card& o) ;
cJSON* parse(const Intent_Message_Suggestion& o) ;
cJSON* parse(const Intent_Message_Suggestions& o) ;
cJSON* parse(const std::string& val) ;
cJSON* parse(const Intent_Message_LinkOutSuggestion& o) ;
cJSON* parse(const Intent_Message_SelectItemInfo& o) ;
cJSON* parse(const Intent_Message_ListSelect_Item& o) ;
cJSON* parse(const Intent_Message_CarouselSelect& o) ;
cJSON* parse(const Intent_Message_CarouselSelect_Item& o) ;
cJSON* parse(const Intent_Message_ListSelect& o) ;
cJSON* parse(const Intent_Message_TelephonyPlayAudio& o) ;
cJSON* parse(const Intent_Message_TelephonySynthesizeSpeech& o) ;
cJSON* parse(const Intent_Message_TelephonyTransferCall& o) ;
cJSON* parse(const Intent_Message_QuickReplies& o) ;
cJSON* parse(const Intent_Message_Text& o) ;
cJSON* parse(const Intent_TrainingPhrase_Part& o) ;
cJSON* parse(const Intent_WebhookState& o) ;
cJSON* parse(const Intent_TrainingPhrase_Type& o) ;
cJSON* parse(const Intent_TrainingPhrase& o) ;
cJSON* parse(const Intent_Parameter& o) ;
cJSON* parse(const Intent_FollowupIntentInfo& o) ;
cJSON* parse(const Sentiment& o) ;
cJSON* parse(const SentimentAnalysisResult& o) ;
cJSON* parse(const KnowledgeAnswers_Answer_MatchConfidenceLevel& o) ;
cJSON* parse(const KnowledgeAnswers_Answer& o) ;
cJSON* parse(const KnowledgeAnswers& o) ;
cJSON* parse(const Intent& o) ;
cJSON* parse(const google::cloud::dialogflow::v2beta1::Context& o) ;
cJSON* parse(const Intent_Message& msg) ;
cJSON* parse(const QueryResult& qr) ;
cJSON* parse(const StreamingRecognitionResult_MessageType& o) ;
cJSON* parse(const StreamingRecognitionResult& o) ;
private:
switch_core_session_t *m_session;
} ;
#endif