
How To / Tutorials

A triniti Guide

Overview

This tutorial walks you through building a workspace from scratch. It also shares best practices for designing a workspace.

Manage Your Account

Account Setup

Sign Up for a New Account

You can register with triniti.ai by following these steps.

  1. Open the URL: https://developer.triniti.ai/
  2. Click on the “Create Account” link.
  3. Enter your first name, last name, email ID, mobile number & your desired password. (CreateAccount1.png) (CreateAccount2.png) ... (Note: All fields are mandatory)
  4. The password should be at least eight characters long and contain at least one special character.
  5. Click on the “Create Account” button.
  6. An email notification will be sent to the registered email address. (CreateAccount3.png) (CreateAccount4.png)
  7. Access your registered email & click on the “Verify Email” button.
  8. Verification will be completed successfully & the Home page will be displayed. (CreateAccount5.png)

At the moment, we accept sign-ups only with corporate email addresses. The Triniti portal does not recognise emails created via free services such as example@gmail.com, example@yahoo.com or any other similar ID.

While creating your password, make sure it is at least eight characters long and contains at least one special character.

A verification email will be sent to the email address you used to create your triniti account. Open the email and follow the instructions to complete creating your account. This will lead you to the Triniti website’s home page.

Congratulations. You have created your first triniti account!

We are so happy to welcome you onboard and eagerly look forward to collaborating and growing along with you!

Log in to your triniti account

If you already have an account, select the ‘Login’ button at the top where it reads "Already have an account? Login".

Recover your Triniti account

To recover your triniti account:

  1. Open a browser on your laptop/desktop.
  2. Enter the URL & press Enter.
  3. Click on the “Recover Account” link.
  4. Enter your registered email address & click the “Send email” button. (RecoverAccount1.png)

An email will be sent to your email address. Follow the instructions given in the email to recover your triniti account.

Create Your First Workspace

A workspace, as the name indicates, is the space triniti provides you to build your bot. Think of it as a digital folder you receive upon purchase, in which you start building your bot.

Before you start, make sure you have a Triniti.ai account. If you do not have one, go to triniti’s login page and create an account.

This section describes how to create and try out your first triniti.ai workspace.

Create Workspace

Refer to [Project Types](#projecttypes) to understand the features of each type.

Depending on the plan you choose, the costs may vary. See the [Pricing](#pricingsection) section for details.

Once you have chosen your plan, go ahead and create your first workspace by selecting the ‘Create Workspace’ button.

Congratulations! You have now created your very first workspace!

Manage Your Workspace

Manage users for a Workspace

workbook

workbook

workbook

workbook

workbook

Delete a Workspace

workbook

workbook

Manage Settings

workbook

workbook

Manage Messages

workbook

workbook

workbook

Manage FAQ

Adding New FAQs

How to set up FAQs (import from Excel, CSV or from a website URL)?

  1. Click on the workspace name, and import the FAQ file to train.
  2. On importing the file, the list of added FAQs is shown on the FAQ page.
  3. To import from a URL, copy and paste the website’s FAQ page URL.
  4. Click the Import button. (ImportFAQ1.png)

How to change the FAQ response format?

  1. Add a new question in the User Asks field, add the answer in the Bot Answer field and select the default channel.
  2. Click on Add Question Set. (FAQResponseFormat1.png)
  3. Click on Change Format to view the different available formats. (ImportFAQ3.png)

A. Plain Text:

B. Buttons:

C. Carousel / List:

D. Image:

E. SSML:

How to add Quick replies:

How to add Smalltalk?

  a. Select the Smalltalk option from the left menu.
  b. Click on View to edit the answer. (SmallTalk1.png)

How to set the minimum confidence score?

You can set the minimum confidence score required for the bot to return an answer. If the confidence for a user utterance is below the configured minimum, the error message is displayed.

  a. Select the Settings icon displayed beside the bot in the left menu.
  b. Select the General tab.
  c. Set the minimum confidence score to 65.
  d. Configure the “Default Error” message from the Messages tab. (General1.png) (General2.png) (General3.png)

How to customise the webSDK?

You can customise the webSDK's look and feel.

  a. Select “Channels” from the left menu.
  b. Select the “Settings” icon displayed in the webSDK channel. (Channel1.png) (Channel2.png) (Channel3.png)

Managing KeyPhrases

Handling Unknown Words

Formatting FAQ Responses

Redirecting FAQs to Workflows

Setting up Channel Specific Responses

Handling Ambiguity

Managing FAQ Settings

Fine-Tuning FAQs

Define Intents

Adding Intents

workbook

workbook

workbook

Annotating & Linking Entities

Pending

Importing & Exporting Intents

workbook

The system will display “Intents imported successfully” after deployment.

workbook

workbook

Manage Dialogs

Define Entities

Adding Entities

  1. Click on the workspace.
  2. Click on the Entities option.
  3. Click on the + Add Entity button.
  4. Enter the entity name and entity value, and choose the type of entity.
  5. Click on the Save button in the right corner to save.

Managing Dictionary Entities

  1. Click on the workspace.
  2. Click on the Entities option.
  3. Click on the Delete button to delete entities from your workspace.
  4. Click on Save to save your changes.

Managing XXX Entities

Pending

Importing & Exporting Entities

  1. Click on the workspace.
  2. Click on the Entities option.
  3. Click on the Export or Import icons at the right corner to export / import entities.
  4. Click on the Save button after importing.

Default Entities

Pending

Manage Small Talk

Define Acronyms

Manage Fulfillment via Webhooks

Introduction

A webhook is simply an endpoint (an API) that can be called to fulfill a particular task or, in AI terminology, a particular intent. The primary motive for using a webhook as a mode of fulfillment is to call an API that can be written in any programming language, hosted on a server, and accessed irrespective of the scope of the code calling it.

We'll go through this feature to see how it can be leveraged to define a fulfillment for your intent.

Defining a Webhook

A webhook is reasonably easy to define. It just requires a URL and a secret key that is used to validate the requester when the API is accessed. Triniti.ai uses this feature to support plain fulfillment via webhooks and as part of other fulfillment mechanisms as well, namely Workflow. You can define a webhook as the fulfillment for an intent, and in the same way a webhook can be called for a particular node's implementation within a workflow. Refer to Workflows for a better understanding of workflow basics.

Webhook Signature

A Webhook signature has two components, namely, a URL and a secret key.

To create a fulfillment via webhook for a particular intent, navigate to the intent and click the Call Webhook button shown at the bottom of the page, alongside the other fulfillment options.

<Setup webhook fulfillment>
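As an illustration only (not Triniti's reference implementation), the sketch below shows a bare-bones webhook endpoint using the JDK's built-in com.sun.net.httpserver package. The /webhook path, port and reply text are placeholder assumptions; the request and response bodies follow the formats documented later in this section, and a real endpoint should also verify the X-Hub-Signature header described under Security.

import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;

public class WebhookServer {
    public static void main(String[] args) throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
        // The path is an assumption; register whatever URL you configure in the portal.
        server.createContext("/webhook", exchange -> {
            // Read the POST body sent for the intent (see "Generic Webhook Request Format").
            String body = new String(exchange.getRequestBody().readAllBytes(), StandardCharsets.UTF_8);
            System.out.println("Received webhook request: " + body);

            // Reply with a single text message in the documented response structure.
            String reply = "{\"messages\":[{\"type\":\"text\",\"content\":\"Hello from the webhook\"}],\"status\":\"SUCCESS\"}";
            byte[] out = reply.getBytes(StandardCharsets.UTF_8);
            exchange.getResponseHeaders().add("Content-Type", "application/json");
            exchange.sendResponseHeaders(200, out.length);
            try (OutputStream os = exchange.getResponseBody()) {
                os.write(out);
            }
        });
        server.start();
    }
}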

Conversational Workflow Framework

Please refer to the article on workflows for a detailed understanding of dialog flow management in Triniti.ai.

Other than acting as a method of fulfillment for an intent by itself, a webhook can also be leveraged to support certain steps within a workflow. Since a workflow is essentially a sequential and logical implementation of steps (nodes), multiple logical checks may be required to decide which step (node) to call next. It is not always the best idea to perform these tasks within static scripts in a node, or worse, in multiple nodes. This is where webhooks come into the picture.

You can define a webhook to implement the prompt, validation or connections of a node in a workflow. The definition/signature of the webhook is the same as above, i.e., a URL and a secret key; only the purpose changes.

<Setup webhook in a workflow node>

Events

All webhook events have a similar request body; workflow events additionally carry an extra workflow object. The available webhook events are listed below, and a sketch of dispatching on them follows the table.

Event Description
fulfilment For any message handled by the global or the intent-based webhook.
wf_validation Inside workflow execution for user input validation.
wf_u_validation Inside workflow execution for validation of updated value.
wf_connection Inside workflow execution for connection.
wf_prompt Inside workflow execution for prompt.
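As a rough sketch (the method is purely illustrative and not part of any Triniti library), a webhook handler can branch on the event field of the incoming request to decide which logic to run:

public class WebhookEvents {
    // Branch on the "event" field of the webhook request, using the values from the table above.
    static String describe(String event) {
        switch (event) {
            case "fulfilment":      return "intent fulfilment";
            case "wf_validation":   return "workflow input validation";
            case "wf_u_validation": return "validation of an updated value";
            case "wf_connection":   return "workflow connection (routing)";
            case "wf_prompt":       return "workflow prompt";
            default:                return "unknown event";
        }
    }

    public static void main(String[] args) {
        System.out.println(describe("wf_validation")); // prints: workflow input validation
    }
}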

Webhook Request

Your webhook will receive a POST request from Triniti.AI. The webhook is called for each message from a user, depending on whether it is configured for the whole bot or per intent. This request format was chosen to simplify response parsing on the service side when handling multiple channels.

A request comprises the following fields, giving you details about the bot, user profile, user request, and NLP. For text requests, the request body has all the enabled NLP fields; for postback requests you may get only a few of them.

Generic Webhook Request Format

{
    "id": "mid.ql391eni",
    "event": "wf_validation",
    "user": {
        "id": "11229",
        "profile": {}
    },
    "bot": {
        "id": "1874",
        "channel_type": "W",
        "channel_id": "1874w20420077206",
        "developer_mode": true,
        "sync": true
    },
    "request": {
        "type": "text",
        "text": "mumbai"
    },
    "nlp": {
        "version": "v1",
        "data": {
            "processedMessage": "mumbai",
            "intent": {
                "name": "txn-bookflight",
                "confidence": 100.0
            },
            "entities": {
                "intentModifier": [{
                    "name": "intentModifier",
                    "value": null,
                    "modifiers": []
                }],
                "source": [{
                    "name": "source",
                    "value": "mumbai",
                    "modifiers": null
                }]
            },
            "debug": [{
                "faq-subtopic-confidence": 0.0,
                "faq-topic-confidence": 0.0
            }],
            "semantics": [{
                "sentence-type": "instruction",
                "event-tense": "present",
                "semantic-parse": "location:DESCRIPTION[]"
            }]
        }
    },
    "workflow": {
        "additionalParams": {
        },
        "workflowVariables": {
            "modifier_intentModifier": "",
            "modifier_destination": ""
        },
        "globalVariables": null,
        "requestVariables": {
            "intentModifier": "null",
            "source": "mumbai"
        },
        "nodeId": "Source",
        "workflowId": "bf9f3713-7921-4927-8a40-5876b1012543"
    }
  }

Request Body

Property Type Description
user Object User object. User details acquired from that particular channel
time String Timestamp of the request
request Object Request object. User Request Details
nlp Object NLP object. Natural Language Processing information about the request
id String Unique ID for each request
event String Event Type
bot Object Bot object. Bot details
workflow Object Workflow object. Only for requests made from workflow.

User Profile

Property Type Description
id String Channel User ID
profile Object Profile information acquired from the Channel

Bot Details

Property Type Description
id String Triniti AI Bot ID
channel_type String Channel type
channel_id String Channel ID for the Bot
developer_mode Boolean Developer or Live mode
language_code String Bot language code
sync Boolean Channel is sync or async

Natural Language Processing

Property Type Description
version String Triniti API version
body Object NLP body; the fields depend on the Triniti API version

Workflow Object

Property Type Description
workflowId String Unique ID for workflow
nodeId String Unique ID for workflow node
requestVariables Object Local request variables
workflowVariables Object Variables persisted across workflow
globalVariables Object Variables persisted across session
additionalParams Object Some additional data

You can use the Webhook Java library to parse the request.
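If you are not using the library, the request can also be read with any JSON parser. The sketch below uses Jackson purely as an example (an assumption, not the Webhook Java library) to pull a few fields out of a trimmed-down version of the request shown above.

import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;

public class WebhookRequestParser {
    public static void main(String[] args) throws Exception {
        // "body" would normally be the raw POST payload received from Triniti.
        String body = "{\"id\":\"mid.ql391eni\",\"event\":\"wf_validation\","
                + "\"request\":{\"type\":\"text\",\"text\":\"mumbai\"},"
                + "\"nlp\":{\"data\":{\"intent\":{\"name\":\"txn-bookflight\",\"confidence\":100.0}}}}";

        JsonNode root = new ObjectMapper().readTree(body);
        String event = root.path("event").asText();
        String text = root.path("request").path("text").asText();
        String intent = root.path("nlp").path("data").path("intent").path("name").asText();
        System.out.println(event + " / " + text + " / " + intent); // wf_validation / mumbai / txn-bookflight
    }
}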

Webhook Response

Webhook response has most of the generic components, but some are specific to its implementation within the workflow.

Following is the expected webhook response structure.

Generic Webhook Response Format

{
    "messages": [{
        "type": "text",
        "content": "<text_message>",
        "quick_replies": [{
            "type": "text",
            "title": "Search",
            "payload": "<POSTBACK_PAYLOAD>",
            "image_url": "http://example.com/img/red.png"
        }, {
            "type": "location"
        }]
    }, {
        "type": "list",
        "content": {
            "list": [{
                "title": "",
                "subtitle": "",
                "image": "",
                "buttons": [{
                    "title": "",
                    "type": "<postback|weburl|>",
                    "webview_type": "<COMPACT,TALL,FULL>",
                    "auth_required": "",
                    "life": "",
                    "payload": "",
                    "postback": "",
                    "intent": "",
                    "extra_payload" :"",
                    "message": ""
                }]
            }],
            "buttons": []
        },
        "quick_replies": []
    }, {
        "type": "button",
        "content": {
            "title": "",
            "buttons": []
        },
        "quick_replies": []
    }, {
        "type": "carousel",
        "content": [{
            "title": "",
            "subtitle": "",
            "image": "",
            "buttons": []
        }],
        "quick_replies": []
    }, {
        "type": "image",
        "content": "",
        "quick_replies": []
    }, {
        "type": "video",
        "content": "",
        "quick_replies": []
    }, {
        "type": "custom",
        "content": {}
    }],
    "render": "<WEBVIEW|BOT>",
    "keyboard_state": "<ALPHA|NUM|NONE|HIDE|PWD>",
    "status": "<SUCCESS|FAILED|TFA_PENDING|TFA_SUCCESS|TFA_FAILURE|PENDING|LOGIN_PENDING>",
    "expected_entities": [],
    "extra_data": [],
    "audit": {
        "sub_intent": "",
        "step": "",
        "transaction_id": "",
        "transaction_type": ""
    }
}
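As a minimal sketch of assembling a response of this shape, the example below builds a single text message with one quick reply using Jackson (used here only for illustration; the Webhook Java library mentioned below can do this for you, and the message content is a placeholder).

import com.fasterxml.jackson.databind.ObjectMapper;
import com.fasterxml.jackson.databind.node.ArrayNode;
import com.fasterxml.jackson.databind.node.ObjectNode;

public class WebhookResponseBuilder {
    public static void main(String[] args) throws Exception {
        ObjectMapper mapper = new ObjectMapper();
        ObjectNode response = mapper.createObjectNode();

        // One plain text message with a single text quick reply.
        ArrayNode messages = response.putArray("messages");
        ObjectNode message = messages.addObject();
        message.put("type", "text");
        message.put("content", "Your flight is confirmed.");
        ObjectNode quickReply = message.putArray("quick_replies").addObject();
        quickReply.put("type", "text");
        quickReply.put("title", "Search");
        quickReply.put("payload", "<POSTBACK_PAYLOAD>");

        response.put("status", "SUCCESS");
        System.out.println(mapper.writerWithDefaultPrettyPrinter().writeValueAsString(response));
    }
}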

You can use the Webhook Java library to form the response.

It has provision to accept responses of multiple types, namely:

Find below the definitions of all these template types:

Templates

Model Template definitions

1. quickReplyTextTemplate

Sample :

{
  "button":["Select"],"title":"XXXX 5100"
}

2. imageTemplate

Sample :

{
  "image":"imgs/card.png"
}

3. buttonTemplate

Sample :

{
  "buttons" : [ {
    "title" : "yes"
  } ]
}

4. carouselTemplate

Sample :

[
  {
    "buttons" : [ {
      "title" : "yes"
    }
                ],
    "image" : "https://beebom-redkapmedia.netdna-ssl.com/wp-content/uploads/2016/01/Reverse-Image-Search-Engines-Apps-And-Its-Uses-2016.jpg",
    "title" : "head",
    "subtitle" : "subtitle"
  }
]

5. listTemplate

Sample :

{
  "list": [
    {
      "image": "imgs/card.png",
      "buttons": [
        "button"
      ],
      "subTitle": "XXXX 5100",
      "title": "VISA"
    },
    {
      "image": "imgs/card.png",
      "buttons": [
        "button"
      ],
      "subTitle": "XXXX 5122",
      "title": "VISA"
    },
    {
      "image": "imgs/card.png",
      "buttons": [
        "button"
      ],
      "subTitle": "XXXX 5133",
      "title": "VISA"
    }
  ]
}

6. videoTemplate

Sample :

{
  "video": "https://www.w3schools.com/html/mov_bbb.mp4"
}

7. custom

This type can have a user-defined JSON payload to render a user-defined template. The above templates are what triniti.ai provides, but users are free to define their own templates/response types and populate them from their API.

Other components of the response:

As part of a Workflow, webhook responses have some additional fields, namely:

Security

The HTTP request contains an X-Hub-Signature header with the SHA1 signature of the request payload, computed using the app secret as the key and prefixed with sha1=. Your webhook endpoint can verify this signature to validate the integrity and origin of the payload.

Please note that the calculation is made on the escaped Unicode version of the payload, with lower case hex digits. For example, the string äöå will be escaped to \u00e4\u00f6\u00e5. The calculation also escapes / to \/, < to \u003C, % to \u0025 and @ to \u0040. If you calculate against the decoded bytes, you will end up with a different signature.
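A minimal verification sketch in Java, using only the standard javax.crypto and java.security APIs (the header value, payload and secret shown in main are placeholders; pass the raw escaped request body exactly as received):

import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;

public class SignatureVerifier {
    // Compares the X-Hub-Signature header value against "sha1=" + HMAC-SHA1(payload, appSecret).
    static boolean isValid(String xHubSignature, String payload, String appSecret) throws Exception {
        Mac mac = Mac.getInstance("HmacSHA1");
        mac.init(new SecretKeySpec(appSecret.getBytes(StandardCharsets.UTF_8), "HmacSHA1"));
        byte[] digest = mac.doFinal(payload.getBytes(StandardCharsets.UTF_8));

        StringBuilder expected = new StringBuilder("sha1=");
        for (byte b : digest) {
            expected.append(String.format("%02x", b & 0xff)); // lower case hex digits, as required
        }
        // Constant-time comparison to avoid leaking timing information.
        return MessageDigest.isEqual(
                expected.toString().getBytes(StandardCharsets.UTF_8),
                xHubSignature.getBytes(StandardCharsets.UTF_8));
    }

    public static void main(String[] args) throws Exception {
        // Placeholder values for illustration only.
        System.out.println(isValid("sha1=0123abcd...", "{\"id\":\"mid.ql391eni\"}", "your-app-secret"));
    }
}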

Java sample code available at GitHub.

Manage Fulfillment via Workflows

Workflow Editor

The Workflow Editor, as the name suggests, is a GUI to create a new workflow or edit an existing one.

Workbook

A workbook is where you design the complete flow for a particular intent. Each workbook can have multiple nodes that define the flow. The figure below shows a new workbook with only a Start node and the toolbar.

<workbook image>

Toolbar

The toolbar sits in the top-right corner of the workflow editor and comprises:

<fullscreen image> Full Screen: Makes the editor occupy the full screen or return to the standard window.

<download image> Download: Downloads the AIFlow file. The AIFlow file can be imported for another intent or into another workspace.

<upload image> Upload: Used when a pre-designed AIFlow file is available and needs to be uploaded to show the workflow in the editor.

<debug image> Debug Workflow: Shows the source code of the AIFlow file and the JSON workflow, for debugging purposes.

<save image> Save Workflow: Saves the workflow.

Node

A node is one step of the process to complete a flow. Each node is attached to one entity, and every node has three responsibilities:

  1. Ask the user for input
  2. Validate the input
  3. Route to the next node

Each node except the Start node has four buttons in its upper right-hand corner to open Definition, Validation, Connection, and to delete it. The Start node has no Validation. The small “garbage can” icon on every node lets the user selectively delete that node.

<node image>

Definition Tab

This tab contains the following keys (as can be seen in the figure below):

Node name: name of the node.
Node description: a brief description of what the node does.
Entities: The entity to be handled by the node.
Prompt: Messages to send to the user to ask for the entities.
See Prompts for more information.
<definition image>

Validation Tab

This tab contains the keys participating in the validation of user inputs:

Validation Type: defines the type of validation of the user inputs, e.g. regex validation, camel route validation, etc.
Login session required: if checked, the secured key in the JSON is set to true, meaning that the user will have to be logged in as part of the validation for this node.
End flow if Validation Fails: if checked, the flow will end altogether in case the validation of the current node fails.
Error Prompt: this static error text message is displayed if the user fails the validation.
Update Prompt: this static text message is displayed if the user updates the value.
See Handling Validations for more information.
<validation image>

Connection Tab

This section can be configured to have conditional branching to another node and is reflected in the script section of the input definition in JSON.

<connection image>

Defining a Workflow

A workflow helps define conversation journeys. The intent and entity might be enough information to identify the correct response, or the workflow might ask the user for further input needed to respond correctly. For example, if a user asks, “What is the status of my flight?”, the workflow can ask for the flight number.

In a workflow, each entity is handled by a node. A node has at least a prompt and a connection: a prompt asks for user input, and a connection links to another node. In a typical workflow that handles n entities there are n+2 nodes (one node per entity, plus a start and a cancel node). Even though we expect user inputs in a sequence while designing a workflow, by design a workflow can handle entities in any order, or all in a single statement. Out of the box it supports out-of-context scenarios, updates, and many other features. See Features for the list of workflow features.

Sequence of Execution

The workflow, like any other bot conversation, starts with an utterance made by the user, followed by Triniti's intent identification, and consequently by the start/init node of the workflow.

Every node has the jobs of slot-filling (getting the entity value from the user), validating the user input and moving on to the next node.

As one might imagine, to fulfill the above process each node has a "prompt" to ask the user for an utterance (to get the entity it expects), a "validation" process to validate the entry, and finally a "connection" to jump to another node and further the flow.

Hence, the sequence of execution involving the user and the workflow is:

<sequence image>

Sample Scenario:

1) User says: “I want to book a flight from Delhi to Singapore.”

2) Since the workflow has been configured as the fulfillment of the intent identified here (let's say "txn-bookflight"), the init (start) node is called. The connection of this node is executed, and the next linked node is called. (Hence the start node has only a "connection".)

3) The node connected to the start node gets called by the "connection" part of the start node. Let's call this node X for easier understanding. For this node (and all nodes from here on), the prompt is executed first.

4) The prompt has the responsibility of prompting the user to enter an utterance that resolves the expected entity for the current node (node X).

5) Once the user enters the utterance, the validation of node X is executed to validate the entity value entered by the user as part of the text.

6) On successful validation, control moves on to the connection of node X, where a logical decision is made about which node to branch to from the current node (X).

7) On successful execution of the connection, the underlying framework has now resolved the node to branch to. Let's call this node X+1.

Now steps 4 through 7 are executed sequentially for each node until the user ends the flow, any of the above steps fails, or the flow itself ends successfully.

Find the detailed description of Prompts, Validations and Connections in the following text.

Prompts - User Input

A prompt defines how to ask for the required information or reply to the user. You can define it in the following ways:

Send Message

If you want to respond with a text message, choose the Send Message option and add a message in the Message Content text box. A message can include workflow variables using curly braces. For example, if you have name as a variable in the workflow context, your message would look like: Hi {name}, how can I help you? (A small illustration of this substitution follows the figure below.)

<send message>
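For illustration only (the platform performs this substitution for you at runtime; the variable and value below are made up), the curly-brace interpolation behaves like a simple string replacement:

public class PromptVariableExample {
    public static void main(String[] args) {
        // A workflow variable "name" with value "Asha" substituted into the message content.
        String template = "Hi {name}, how can I help you?";
        String rendered = template.replace("{name}", "Asha");
        System.out.println(rendered); // Hi Asha, how can I help you?
    }
}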

Send Template

Sometimes it’s a better user experience to ask for or show information using templates. For that, you have the option to choose from:

To use one of the templates as a prompt, choose Send Template in the prompt and click the ‘Create Template’ button. That will show you a dialog like the one in the figure below. See Templates for the complete list with details.

<send template>

Call Webhook for Prompt

You can define a webhook to return a dynamic response. The defined webhook will receive a request with the event workflow_webhook_pipeline. See Webhooks for more information.

Validations

If you want to validate user input before proceeding to the next node, you can do so using one of the following:

Validation using Regex

For a typical node which expects a single simple entity such as a mobile number or email, you can use a regex to validate the input. For example, if you expect only Gmail addresses as input, you can define a regex like ^[A-Z0-9._%+-]+@gmail\\.com$. Any valid Java regex will work.
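A quick standalone check of the pattern above (a sketch in plain Java; CASE_INSENSITIVE is added here as an assumption so that lower-case addresses also match, since the character class in the example is upper-case):

import java.util.regex.Pattern;

public class RegexValidationExample {
    public static void main(String[] args) {
        // Same pattern as in the text, compiled case-insensitively.
        Pattern gmailOnly = Pattern.compile("^[A-Z0-9._%+-]+@gmail\\.com$", Pattern.CASE_INSENSITIVE);
        System.out.println(gmailOnly.matcher("jane.doe@gmail.com").matches()); // true
        System.out.println(gmailOnly.matcher("jane.doe@yahoo.com").matches()); // false
    }
}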

Validation using Groovy Script

You can define validation code in a Groovy script. Request and workflow variables are exposed, along with the webhook request object as wRequest (refer to the WorkflowRequest class on GitHub). The Groovy validation script needs to return a status (success/error) with other optional fields such as workflow_variables and global_variables as a JSON response.

Sample Groovy Script

import groovy.json.JsonSlurper
import java.text.SimpleDateFormat
import java.text.DateFormat
import java.util.Date
import java.text.ParseException

if (sys_date != null) {
    DateFormat df = new SimpleDateFormat("dd-MM-yyyy");
    Date now = new Date()
    try {
        Date date = df.parse(sys_date);
        if (date < now) {
            return new JsonSlurper().parseText(' {"status":"error"}')
        } else {
            return new JsonSlurper().parseText(' {"status":"success", "workflow_variables": {"travel_date": "' + date.format('yyyy-MM-dd') + '"}}')
        }
    } catch (ParseException pe) {
        return new JsonSlurper().parseText(' {"status":"error"}')
    }
} else {
    return new JsonSlurper().parseText(' {"status":"error"}')
}

See Scripting via Groovy for more information.

Validation using JavaScript

Like the Groovy script, you can also define validation code in JavaScript. Request and workflow variables are exposed, along with the webhook request object as wRequest (refer to the WorkflowRequest class on GitHub). The validation script needs to return a status (success/error) with other optional fields such as workflow_variables and global_variables as a JSON response.

Sample Javascript

function test() {
    if (sys_amount == '7000' && wRequest.bot.languageCode == 'en') {
        return {
            'status': 'success',
            'workflow_variables': {
                'travel_year': '2019'
            }
        };
    } else {
        return {
            'status': 'error'
        };
    }
}
test();

See Scripting via JavaScript for more information.

Response expected from Groovy and JavaScript Validation Scripts:
{
  "messages": [...],
  "render": "<WEBVIEW|BOT>",
  "keyboard_state": "<ALPHA|NUM|NONE|HIDE|PWD>",
  "status": "<success|error|2faPending|2faSuccess|2faFailure|pending|loginPending>",
  "expected_entities": [],
  "workflow_variables": {
    "entity_1": "value_1",
    "entity_2": "value_2"
  },
  "global_variables": {
    "entity_3": "value_3",
    "entity_4": "value_4"
  }
}

Validation using Webhook

For complex cases, you can define a webhook to validate user input. The defined webhook will receive a request with the event wf_validation or wf_u_validation. See Webhooks for more information.

Connections

Based on the current input and previous inputs, you can instruct what to do next. You have the following options to define the connections between nodes. For example, at node A you asked for the user's age, and now based on that you want to decide whether to allow them to book the tickets or not. These kinds of rules can be defined in the connections. Whenever you create a new node from an existing node, an entry is added in the connection of that node. You can define the conditions there, or if there is only a single connected node, it is automatically added as the default next node. To define these connections you have these options:

Connection Builder

Connection Builder is a GUI to define simple routing. It is the best option when defining a mockup of the flow, or for simple use cases where routing is based only on entities or is just a default routing. Whenever you create a new node from an existing node, the new node's entry is added to the parent node's connection builder as the default routing. If multiple nodes are added from a node, you need to define the conditions to route to each node, and can keep one as the default fallback node. <connection builder image>

Groovy Script

You can use a Groovy script to define routing. The Groovy script needs to return the response in JSON format. For example, as per the code snippet below, if text is hello, go to the node with id world, else to the node with id error.

import groovy.json.JsonSlurper
if (text == 'hello' ) {
    return new JsonSlurper().parseText('{"id":"world", "type": "node"}')
} else {
    return new JsonSlurper().parseText('{"id":"error", "type": "node"}')
}

JavaScript

Similar to Groovy, routing can be achieved using a JavaScript function like:

function test() {
    if (text == 'hello') {
        return {
            "id": "exit",
            "type": "node"
        }
    } else {
        return {
            "id": "error",
            "type": "node"
        }
    }
}
test();

Connection Webhook

You can define a webhook to provide dynamic routing. The defined webhook will receive a request with the event wf_connection. See Webhooks for more information. A webhook's nature as a tool to call an API is useful here when the connection needs extensive coding, or when the developer wants to exercise language discretion.

Scripting via JavaScript

<connection javascript image>

Scripting via Groovy

<connection groovy script image>

Templates GUI

Button payload data must be in JSON format with data and intent. Data can be any JSON object.

For example:

{
    "data": {
        "flight-no": "AI381"
    },
    "intent": "txn-bookflight"
}

Features

  1. Workflow Cancellation
  2. Amend inputs
  3. Workflow Context
  4. Global Context
  5. Handling multiple inputs in a single statement
  6. Visualize flow
  7. Partial state save (Coming Soon)
  8. In step login

Debugging Workflows

~Coming Soon~

Workflow FAQs

Q. What can I do if I need to add a static prompt as well as a dynamic template?

A. You can use a combination of Send Message and Call Webhook to show a static text message followed by a dynamic template (or even text) sent from the API implementation. You can also use Call Webhook to do any number of text-template combinations.

Q. When is the text in update and error prompts in the "validation" tab of a node displayed?

A. The error prompt is shown when a particular node's validation finds the entity entered by the user to be incorrect. Similarly, the update prompt is shown when an older entity (one whose node has already been executed) gets updated. The update prompt gives the update message of the entity (node) that has been updated, not of the current node.

Q. What is a postback? What is the format to define it?

A. A postback is the request body the server gets once the user clicks on any postback-type button or quick reply. Morfeus expects it to be JSON with either 'intent' or 'type', and 'data'. For example, this is a valid postback:

{
    "data": {
        "flight-no": "AI381"
    },
    "intent": "txn-bookflight"
}

Q. How do I point a button to an FAQ?

A. You can create a postback-type button with the following postback data:

{
    "data": {
        "FAQ": "<<ANY TRAINED FAQ UTTERANCE>>"
    },
    "type": "MORE_FAQ"
}

Manage Channels

Managing Web

How to embed the webSDK into a website?

Import the reference to sdk.js from the morfeuswebsdk public site in the main HTML page of your host site, where you intend the chatbot to render.

<script type="text/javascript" src="https://ai.yourcompany.com/morfeuswebsdk/libs/websdk/sdk.js" id="webSdk"></script>

Code Snippet

Invoke the SDK by adding the following code snippet:

(function() {
  var customerId = "";
  var appSessionToken = "";
  var initAndShow = "1";
  var showInWebview="0";
  var endpointUrl = "https://ai.yourcompany.com/morfeus/v1/channels";
  var desktop = {
      "chatWindowHeight": "90%",
      "chatWindowWidth": "25%",gs
      "chatWindowRight": "20px",
      "chatWindowBottom": "20px",
      "webviewSize": "full"
  };
  var initParam = {
        "customerId": customerId,        
        "desktop": desktop,            // screen Size of the chatbot for desktop version
        "initAndShow": initAndShow,     // maximized or closed state .
        "showInWebview": showInWebview,
        "endpointUrl": endpointUrl,
        "botId": "XXXXXXXXXXXXX",             // unique botId Instance specific
        "domain": "https://ai.yourcompany.com",    // hosted domain address for websdk
        "botName": "default",
        "apiKey": "1234567",
        "dualMode": "0",
        "debugMode": "0",
        "timeout": 1000 * 60 * 15,
        "idleTimeout": 1000  * 60 * 15,
        "quickTags" : {
          "enable" : true,
          "overrideDefaultTags" : true,
          "tags" : ["recharge","balance", "transfer", "pay bills"]
        }
    }
 window.options = initParam;
 websdk.initialize(initParam);
})();
Option Description Default
botName The name of the bot
desktop The chat dimension configuration for desktop browsers; refer to the above sample code
endpointUrl The bot API URL ""
customerId The unique ID for handling the session ""
domain The domain part of the URL of the host webapp, with the protocol ""
destroyOnClose Removes the websdk instance when the close button is clicked FALSE
initAndShow Opens the chat window when the SDK is initiated 1
version The websdk version used 1.3.11

Integration with morfeuswebsdk

<!DOCTYPE html>
<html lang="en">
    <head>
        <meta charset="UTF-8">
        <meta name="viewport" content="width=device-width, initial-scale=1.0">
        <meta http-equiv="X-UA-Compatible" content="ie=edge">
        <title>Chatbot</title>
        <style>
            #chatbot {
                bottom: 0;
                right: 0;
                width: 435px !important;
                height: 100%;
                position: absolute;
            }
        </style>
    </head>

    <body>
        <iframe id="chatbot" frameborder="0" scrolling="no" allow="microphone" src="https://ai.yourcompany.com/morfeus/"></iframe>
    </body>
</html>

Customizing Web

List Of Customisable Features:

Feature Customisable How
Common Templates Yes Common templates can be customised by writing CSS in vendor-theme-chatbox.css.
Customer Specific Templates Yes Customer-specific templates can be customised by writing CSS in vendor-theme-chatbox.css.
Feature Customisable How
Icons Yes Icons can be customised by replacing icons references in index.css and common templates.
Feature Customisable How
Minimize Button Yes The minimise button can be customised by changing its image reference in desktopHeaderTemplate.html and mobileHeaderTemplate.html in common templates, and its style values in vendor-theme-chatbox.css.
Close Button Yes The close button can be customised by changing its image reference in desktopHeaderTemplate.html and mobileHeaderTemplate.html in common templates, and its style values in vendor-theme-chatbox.css.
Maximize Button Yes This feature can be introduced in the websdk by providing a flag in index.js of the project, and can be customised by making changes in minimizedStateTemplate.html in common templates and style values in vendor-theme-chatbox.css.

The flag that needs to be introduced in initParams of index.js of the project is "minimizedState : false".

Feature Customisable How
Splash Screen Yes Modify the splashScreenTemplate.html available in the common templates. Customize this template according to the requirement by adding style(css) in vendor-theme-chatbox.css.
Feature Customisable How
Popular Query - This feature displays a menu at the footer of the chatbot showing popular queries which could be made by the user to the bot. Yes Can be customised by changing the display type of the submenu in the payload coming in the init call.
Feature Customisable How
Prelogin No Default feature. No customisation needed.
Postlogin Yes Postlogin is a scenario where the user is already logged in to the parent app (the bank's application), so the morfeus-controller has to pass X-CSRF-TOKEN in the init request to ensure it is a postlogin case.
Feature Customisable How
Analytics - This feature adds analytics management to the websdk container. Yes This feature can be enabled/disabled in the websdk by adding/removing the analytics flag object in the initParams config object of index.js of the project. Below is the config structure to be added, along with the type of the values to be added in the object.
"analytics" : {
    "enabled" : true/false,
    "crossDomains" : [domain 1, domain 2, ...domain n],
    "ids" : {
        "analyticsServiceProviderName" : "apiKey"
    }
}
Feature Customisable How
DestroyOnClose - This feature destroys the instance of the bot completely from the parent page containing the chatbot. Yes This feature can be enabled/disabled in the websdk by adding/removing the destroyOnClose flag in the initParams config object of index.js of the project. Below is the flag to be added.
destroyOnClose : true/false
Feature Customisable How
Quick Tags - This feature adds a quick replies list to the chatbot, usually shown in the bottom section of the chatbot. Yes This feature can be enabled/disabled in the websdk by adding/removing the quickTags flag object in the initParams config object of index.js of the project. It can further be customised by changing the payload coming in the init network call after the bot is rendered, in order to load quick tag options dynamically from the server. Also, if overrideDefaultTags is set to false, the quickTags can be modified from the server response quick_replies, and if the response from the server is empty, the quickTags will be loaded from index.js. Below is the flag object to be added.
"quickTags" : {
    "enabled" : true/false,
    "overrideDefaultTags" : true/false,
    "tags" : ["tag1", "tag2"...."tagn"]
}
Feature Customisable How
Start Over Chat - This feature reinitiates the chatbot by clearing all the current session chat messages, if required, as well as the current user login session. Yes This feature can be enabled/disabled in the websdk by adding/removing the startOverChat flag object in the initParams config object of index.js of the project. Below is the flag object to be added.
"startOverConfig" : {
    "clearMessage" : true/false
}
Feature Customisable How
Idle timeout - This feature logs out the user if the user is logged in and has been inactive for a certain period of time. Yes The idle timeout in the websdk can be changed by changing the value of the idletimeout flag in the initParams config object of index.js of the project. Below is the flag to be added.
"idletimeout" : 'idleTimeoutValue'
Feature Customisable How
Bot Dimension - This feature specifies the dimensions of the chatbot in the parent container. Yes The dimension and position of the chatbox window inside the parent container window can be managed by providing the desktop flag object in the initParam config object. Below is the flag to be added.
"desktop" : {
    "chatWidth" : "value in %",
    "chatHeight" : "value in %",
    "chatRight" : "value in px",
    "chatBottom" : "value in px"
}
Feature Customisable How
Upload Image - This feature is used to upload the user's image in the chatbox. Yes (only style) This is only customisable in look and feel, by making changes in chatBoxTemplate.html's editImage modal in common templates.
Feature Customisable How
Teach Icon Yes The icon can be changed by replacing it with the desired image in the images folder of the project and making the corresponding changes in feedbackTemplate.html of common templates.
Feedback Icons Yes The icons can be changed by replacing them with the desired images in the images folder of the project and making the corresponding changes in feedbackTemplate.html of common templates.
Feature Customisable How
Date Format - This feature keeps a particular date format when the chatbot shows the last login date to the user. Yes It can be changed by keeping a lastLoginDateFormat flag in the initParams object of the index.js file of the project. Below is the flag to be added.
lastLoginDateFormat : "date format in string"
Feature Customisable How
Emoji - Supporting emojis in the websdk. Yes 1. This feature can be enabled/disabled in the websdk by adding/removing a flag called emojiEnabled in the initParams config object of index.js of the project.

2. The different emojis supported in the websdk can be mentioned in the emoji array flag, where each element contains an array of emoji codes. This also has to be added in the initConfig params object of index.js of the project. The markup and style values of the emoji box can be changed by customising emojiTemplate.html in common templates and vendor-theme.chatbox.css respectively.

For Point 1

Below is the flag to be added:

enableEmoji : true/false

For Point 2

Below is the flag to be added:

emoji : [
    ["emoji code1","emoji code2","emoji code3" ....."emoji code n"],

    ["emoji code1","emoji code2","emoji code3" ....."emoji code n"],

    .

    .

    ["emoji code1","emoji code2","emoji code3" ....."emoji code n"]
]
Feature Customisable How
SSL Pinning - This feature is implemented in the mobile implementation of the websdk. It is a security check for certificates on network calls. Yes This feature can be enabled/disabled in the websdk by adding/removing a flag called sslEnabled in the initParams config object of index.js of the project. Below is the flag to be added.
sslEnabled : true/false
Feature Customisable How
Server Sent Event Feature - This feature is used when the chatbot is unable to continue the chat with the user and the chat is transferred to a human agent. No
Feature Customisable How
Push Notification - This feature sends push notification updates to the user. Yes This feature can be enabled/disabled in the websdk by adding/removing the pushConfig object in the INIT_DATA object through the admin panel of morfeuswebsdk.
Feature Customisable How
AutoSuggest - This feature presents suggestions to the user whenever the user is typing a query in the chatbox. Yes This feature can be enabled/disabled in the websdk by adding/removing the autoSuggestConfig object in the INIT_DATA object through the admin panel of morfeuswebsdk. Below is the sample config to be added.
"autoSuggestConfig" : {
    "enableSearchRequest" : true/false,
    "enabled" : true/false,
    " noOfLetters" : number
}
Feature Customisable How
VoiceSdk - This feature provides mic support in the hybrid SDK on mobile-based platforms. Yes This feature can be enabled/disabled in the websdk by adding/removing the voiceSdkConfig object in the INIT_DATA object through the admin panel of morfeuswebsdk. Below is the sample payload config to be added.
"voiceSdkConfig" : {
    "enableVoiceSdkHint" : true/false,
    "maxVoiceSdkHint" : number of hints,
    "speechConfidenceThreshold" : threshold value

}
Feature Customisable How
Custom Webview Header - Used to insert a customised webview header template. Yes This feature has a template dependency which can be customised by changing webviewHeaderTemplate.html from common templates and the CSS style from vendor-theme-chatbox.css. This feature can be enabled/disabled in the websdk by adding/removing a flag called customWebviewHeader in the initParams config object of index.js of the project. Below is the flag object to be added in the initParams config object.
"customWebviewHeader" : {
    "enable" : true/false
}
Feature Customisable How
Show Postback Utterance - This feature triggers an external message in the websdk from the parent container. Yes This feature can be enabled/disabled in the websdk by adding/removing a flag called showPostbackUtterance in the initParams config object of index.js of the project.

Below is the flag to be added in initParams Config Object-

"showPostbackUtterance" : true/false

Type of value to be added in the above flag

showPostbackUtterance : Boolean

Feature Customisable How
Custom Errors - This feature handles various network response error scenarios in the websdk. Yes This feature can be enabled/disabled in the websdk by adding/removing a flag called customErrors in the initParams config object of index.js of the project. For various HTTP error statuses, the project can define different messages in a function called handleErrorResponse of the preflight.js file. Below is the flag to be added in the initParams config object.
"customErrors" : true/false
Feature Customisable How
Hide On Response - This feature minimises the chatbot while the user is having a conversation with the bot, depending upon the template type coming from the network response. Yes This feature can be enabled/disabled in the websdk by adding/removing a flag called hideOnResponseTemplate in the initParams config object of index.js of the project. Below is the flag to be added in the initParams config object.
"hideOnResponseTemplates" : ["templateName1",
"templateName2",...."templateName n"]
Feature Customisable How
Show Close Button On Postlogin - This feature is used to remove the close button in the post-login scenario. Yes This feature can be enabled/disabled in the websdk by adding/removing a flag called showCrossOnPostLogin in the initParams config object of index.js of the project. Below is the flag to be added in the initParams config object.
"showCrossOnPostLogin" : true/false
Feature Customisable How
Location Blocked Message - This is used to display a custom location-blocked message if the user has denied the allow-location pop-up in the browser. Yes This feature can be enabled/disabled in the websdk by adding/removing a flag called locationBlockedMsg in the initParams config object of index.js of the project. Below is the flag object to be added in the initParams config object.
"locationBlockedMsg" : true/false
Feature Customisable How
Slim Scroll - This feature enables a slim scroll bar for the Internet Explorer browser. Yes This feature can be enabled/disabled in the websdk by adding/removing a flag called enableSlimScroll in the initParams config object of index.js of the project. Below is the flag object to be added in the initParams config object.
"enableSlimScroll" : true/false
Feature Customisable How
Unlinkify Email - This feature removes the URL nature of any email text coming in websdk cards. Yes This feature can be enabled/disabled in the websdk by adding/removing a flag called unLinkifyEmail in the initParams config object of index.js of the project. Below is the flag object to be added in the initParams config object.
"unLinkifyEmail" : true/false
Feature Customisable How
Focus On Query - Focuses on the input box if the last message in the websdk is a text. Yes This feature can be enabled/disabled in the websdk by adding/removing a flag called focusOnQuery in the initParams config object of index.js of the project. Below is the flag object to be added in the initParams config object.

"focusOnQuery" : true/false

Feature Customisable How
Custom Negative Feedback - Loads a dynamic feedback template from a network call rather than the default feedback modal. Yes This feature can be enabled/disabled in the websdk by adding/removing a flag called customNegativeFeedback in the initParams config object of index.js of the project. Below is the flag object to be added in the initParams config object.
"customNegativeFeedback" : true/false
Feature Customisable How
If the payload type is RELATED_FAQ and the messageType needed in the network request body is postback Yes This feature can be enabled/disabled in the websdk by adding/removing a flag called postbackOnRelatedFaq in the initParams config object of index.js of the project. Below is the flag object to be added in the initParams config object.
    "postbackOnRelatedFaq" : true/false

Managing Android

Overview

The Android SDK provides a lightweight conversational / messaging UX interface for users to interact with the Triniti Platform. The SDK enables rich conversation components to be embedded in existing Android mobile apps.

Prerequisites

Install SDK

To install the SDK, add the following configuration to your project-level build.gradle file.

allprojects {
   repositories {
       maven {
           url "https://artifacts.active.ai/artifactory/android-sdk-release"
       }
   }
}

Then add the below configuration to your module-level build.gradle file.

dependencies {
    // MFSDK dependencies
    implementation 'com.morfeus.android:MFSDKHybridKit:1.3.41'
    implementation 'com.morfeus.android:MFOkHttp:3.12.0'
    implementation 'com.google.guava:guava:22.0-android'
    implementation 'com.android.support:design:28.0.0'
    implementation 'com.android.support:appcompat-v7:28.0.0'

    // Voice feature dependencies
    implementation 'com.morfeus.android.voice:MFSDKVoice:1.1.6'
    implementation('com.google.auth:google-auth-library-oauth2-http:0.7.0') {
        exclude module: 'httpclient'
    }

    implementation 'io.grpc:grpc-okhttp:1.13.1'
    implementation 'io.grpc:grpc-protobuf-lite:1.13.1'
    implementation 'io.grpc:grpc-stub:1.13.1'
    implementation 'io.grpc:grpc-android:1.13.1'
    implementation 'javax.annotation:javax.annotation-api:1.2'
}

Note: If you get a 64K method limit exception at compile time, add the following code to your app-level build.gradle file.

android {
    defaultConfig {
        multiDexEnabled true
    }
}
dependencies {
    implementation 'com.android.support:multidex:1.0.1'
}

Initialize the SDK

To initialize the Morfeus SDK you need a workspace ID. You can get the workspace ID by navigating to Channels > Android and clicking on the settings icon.

Add the following lines to the Activity/Application where you want to initialize the Morfeus SDK. onCreate() of the Application class is the best place to initialize it. If you have already initialized the MFSDK, reinitializing it will throw MFSDKInitializationException.

MFSDKProperties sdkProperties = new MFSDKProperties.Builder(BuildConfig.BOT_URL)
                .setWorkspaceId(BuildConfig.WORKSPACE_ID)
                .setSpeechAPIKey(BuildConfig.SPEECH_API_KEY)
                .build();

try {
    sMFSdk = new MFSDKMessagingManagerKit.Builder(this)
            .setSdkProperties(sdkProperties)
            .build();
    sMFSdk.initWithProperties();
} catch (MFSDKInitializationException e) {
    Log.e("MFSDK", "Failed to initialise sdk");
}

Invoke Chat Screen

To invoke the chat screen, call the showScreen() method of MFSDKMessagingManager. Here, sMFSdk is an instance variable of MFSDKMessagingManagerKit.

// Open chat screen
MFSDKSessionProperties sdkProperties = new MFSDKSessionProperties.Builder().build();
sMFSdk.showScreen(activityContext, sdkProperties);

You can get an instance of MFSDKMessagingManagerKit by calling getInstance() of MFSDKMessagingManagerKit. Please make sure you have initialized the MFSDK before calling getInstance(). Please check the following code snippet.

try {
    // Get SDK instance
    MFSDKMessagingManager mfsdk = MFSDKMessagingManagerKit.getInstance();
} catch (Exception e) {
// Throws exception if MFSDK not initialised.
}

Compile and Run

Once the above code is added, you can build and run your application. On launch of the chat screen, a welcome message will be displayed.

Enable voice chat

If you haven't already added the required dependencies for voice, please add the following dependencies to your app/build.gradle.

dependencies {
    // Voice feature dependencies
    implementation 'com.morfeus.android.voice:MFSDKVoice:1.1.6'
    implementation('com.google.auth:google-auth-library-oauth2-http:0.7.0') {
        exclude module: 'httpclient'
    }

    implementation 'io.grpc:grpc-okhttp:1.13.1'
    implementation 'io.grpc:grpc-protobuf-lite:1.13.1'
    implementation 'io.grpc:grpc-stub:1.13.1'
    implementation 'io.grpc:grpc-android:1.13.1'
    implementation 'javax.annotation:javax.annotation-api:1.2'
}

Call the setSpeechAPIKey(String apiKey) method of the MFSDKProperties builder to pass the speech API key.

try {
// Set speech API key
MFSDKProperties properties = new MFSDKProperties.Builder(WORKSPACE_ID)
        ...
        .setSpeechAPIKey("YourSpeechAPIKey")
        ...
        .build();

} catch (MFSDKInitializationException e) {
    Log.e("MFSDK", e.getMessage());
}

Set Speech-To-Text language

In MFSDKHybridKit, English (India) is the default language for Speech-To-Text. You can change the STT language by passing a valid language code using the setSpeechToTextLanguage(Language.STT.LANG_CODE) method of MFSDKSessionProperties.Builder.

MFSDKSessionProperties sessionProperties = new MFSDKSessionProperties.Builder()
    .setSpeechToTextLanguage(Language.STT.ENGLISH_INDIA)
    .build();

Set Text-To-Speech language

English (India) is the default language for Text-To-Speech. You can change the TTS language by passing a valid language code using the setTextToSpeechLanguage(Language.TTS.LANG_CODE) method of MFSDKSessionProperties.Builder.

MFSDKSessionProperties sessionProperties = new MFSDKSessionProperties.Builder()
    .setTextToSpeechLanguage(Language.TTS.ENGLISH_INDIA)
    .build();

Provide Speech Suggestions

You can provide additional contextual information for processing user speech. To provide speech suggestions, add a list of words and phrases to the MFSpeechSuggestion.json file and place it under the assets folder of your project. You can add a maximum of 150 phrases to MFSpeechSuggestion.json. To see a sample MFSpeechSuggestion.json, please download it from here.

Security

Enable SSL Pinning

To enable SSL pinning, set enableSSL(boolean enable, String[] pins) to true and pass a set of certificate public key hashes (the SubjectPublicKeyInfo of the X.509 certificate).

MFSDKProperties sdkProperties = new MFSDKProperties
       .Builder(botURL)
       ...       
       .enableSSL(true, new String[]{"sha256/TnsUfcou7yksrrCwJH/NHd1fOeLup8gzfeHUyg+x+pk="}) 
       ... 
       .build();

Enable Root Detection

To prevent chat usage on rooted devices, set enableRootedDeviceCheck() to true.

MFSDKProperties sdkProperties = new MFSDKProperties.Builder(botURL)
    ...  
    .enableRootedDeviceCheck(true)
    ...
    .build();

Prevent user from taking screenshot

To prevent the user or other third-party applications from taking a screenshot of the chat screen, set .disableScreenShot(true).

MFSDKProperties sdkProperties = new MFSDKProperties.Builder(botURL)
    ...  
    .disableScreenShot(true)
    ...
    .build();

Enable APK Tampering Detection

Enable tamper detection to prevent an illegitimate APK from executing. Set checkAPKTampering(true, certificateDigest) and pass the SHA-256 digest of your APK signing certificate.

MFSDKProperties sdkProperties = new MFSDKProperties.Builder(botURL)
    ...  
    .checkAPKTampering(true, "ApkSigningCertificateDigest")
    ...
    .build();

Managing iOS

Overview

The iOS SDK provides a lightweight conversational / messaging UX interface for users to interact with the Triniti Platform. The SDK enables rich conversation components to be embedded in existing iOS mobile apps.

Prerequisites

Install and configure dependencies

1. Install Cocoapods

CocoaPods is a dependency manager for Objective-C which automates and simplifies the process of using third-party libraries in your projects. CocoaPods is distributed as a Ruby gem and is installed by running the following commands in the Terminal app:

$ sudo gem install cocoapods
$ pod setup

2. Update .netrc file

The Morfeus iOS SDK is stored in a secured Artifactory repository. CocoaPods handles the process of linking these frameworks with the target application. When Artifactory requests authentication information while installing MFSDKWebKit, CocoaPods reads the credentials from the .netrc file located in the ~/ directory.

The .netrc file format is as follows: specify the machine (Artifactory) name, followed by the login, followed by the password, each on a separate line. There is exactly one space after each of the keywords machine, login and password.

machine <NETRC_MACHINE_NAME>
login <NETRC_LOGIN>
password <NETRC_PASSWORD>

One example of .netrc file structure with sample credentials is as below. Please check with the development team for the actual credentials to use.

<netrc example image>

Steps to create or update .netrc file

  1. Start up Terminal on your Mac.
  2. Type "cd ~/" to go to your home folder.
  3. Type "touch .netrc"; this creates a new file if a file named .netrc does not already exist.
  4. Type "open -a TextEdit .netrc"; this opens the .netrc file in TextEdit.
  5. Append the machine name and credentials shared by the development team in the above format, if they do not exist already.
  6. Save and exit TextEdit.

3. Install the pod

To integrate 'MFSDKHybridKit' into your Xcode project, specify the below code in your Podfile

source 'https://github.com/CocoaPods/Specs.git'
#Voice support is available from iOS 8.0 and above
platform :ios, '7.1'

target 'TargetAppName' do
pod '<COCOAPOD_NAME>'
end

Once the above code is added, run the install command in your project directory, where your Podfile is located.

$ pod install

If you get an error like "Unable to find a specification for ...", run the command below to update your specs to the latest version.

$ pod repo update

To update your pods to the latest version, run the command below.

$ pod update

Note: If you get a "401 Unauthorized" error, please verify your .netrc file and the associated credentials.

4. Disable bitcode

Select the target, open the "Build Settings" tab and set "Enable Bitcode" to "No".

alt_text

5. Give permission

Search for ".plist" file in the supporting files folder in your Xcode project. Update NSAppTransportSecurity to describe your app's intended HTTP connection behavior. Please refer apple documentation and choose the best configuration for your app. Below is one sample configuration.

<key>NSAppTransportSecurity</key>
<dict>
<key>NSAllowsArbitraryLoads</key>
<true/>
</dict>
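
If you prefer not to allow arbitrary loads globally, a narrower configuration can scope the exception to your bot's domain only. The sketch below assumes bot.example.com stands in for your actual endpoint host; if your endpoint is served over standard HTTPS, you may not need any exception at all.

<key>NSAppTransportSecurity</key>
<dict>
    <!-- Limit the ATS exception to the bot endpoint instead of disabling ATS app-wide -->
    <key>NSExceptionDomains</key>
    <dict>
        <key>bot.example.com</key>
        <dict>
            <key>NSIncludesSubdomains</key>
            <true/>
            <key>NSExceptionAllowsInsecureHTTPLoads</key>
            <true/>
        </dict>
    </dict>
</dict>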

6. Invoke the SDK

To invoke the chat screen, create MFSDKProperties and MFSDKSessionProperties, and then call showScreenWithWorkSpaceId:fromViewController:withSessionProperties: to present the chat screen. Please find the code sample below.

// Add this near the top of the .m of your file
#import "ViewController.h"
#import <MFSDKMessagingKit/MFSDKMessagingKit.h>

@interface ViewController () <MFSDKMessagingDelegate>
@end

// Add this to the implementation in the .m of your file
@implementation ViewController

// Once the button is tapped, show the messaging screen
- (IBAction)startChat:(id)sender
{
    MFSDKProperties *params = [[MFSDKProperties alloc] initWithDomain:@"<END_POINT_URL>"];
    params.messagingDelegate = self;
    [[MFSDKMessagingManager sharedInstance] initWithProperties:params];

    MFSDKSessionProperties *sessionProperties = [[MFSDKSessionProperties alloc] init];
    // Placeholder user/session key-value pairs
    sessionProperties.userInfo = @{@"KEY": @"VALUE"};
    [[MFSDKMessagingManager sharedInstance] showScreenWithWorkSpaceId:@"Workspace" fromViewController:self withSessionProperties:sessionProperties];
}

@end

Properties:

| Property | Description |
| --- | --- |
| Workspace_Id | The unique ID for the bot |
| END_POINT_URL | The bot API URL |

You can get the above properties by navigating to Channels > iOS and clicking the settings icon.

alt_text

Compile and Run

Once the above code is added, you can build and run the app. When the chat screen launches, the welcome message will be displayed.

alt_text

Providing User/Session Information

You can pass the Speech API key if the SDK uses the voice recognition feature.

 MFSDKSessionProperties *sessionProperties = [[MFSDKSessionProperties alloc]init];
 sessionProperties.speechAPIKey = @"<Speech_Key>";

For SSL Security

You can pass the hash keys of your SSL certificates:

[params enableSSL:YES sslPins:@[@"Hash_Key1", @"Hash_Key2"]];

For Security on Jailbroken iPhones

To prevent chat usage on jailbroken devices, pass YES to the following method:

[params enableCheck:YES];

Enable voice chat

Provide Speech API Key

MFSDKWebKit supports the text-to-speech and speech-to-text features. The minimum iOS deployment target for the voice feature is iOS 8.0, so the Podfile also needs to be updated with this minimum deployment target. The Speech API key can be passed using the speechAPIKey property of MFSDKSessionProperties, as below.

   MFSDKSessionProperties *sessionProperties = [[MFSDKSessionProperties alloc]init];
...
   sessionProperties.speechAPIKey = @"<YOUR_SPEECH_API_KEY>";
...

Search for ".plist" file in the supporting files folder in your Xcode project. Add needed capabilities like below and appropriate description.

<key>NSSpeechRecognitionUsageDescription</key>
<string>SPECIFY_REASON_FOR_USING_SPEECH_RECOGNITION</string>

<key>NSMicrophoneUsageDescription</key>
<string>SPECIFY_REASON_FOR_USING_MICROPHONE</string>

Set Speech-To-Text language

English (India) is the default language for Speech-To-Text. You can change the STT language by passing a valid language code using the speechToTextLanguage property of MFSDKSessionProperties. You can find the list of supported language codes here.

MFSDKSessionProperties *sessionProperties = [[MFSDKSessionProperties alloc]init];
   sessionProperties.shouldSupportMultiLanguage = YES;
   sessionProperties.speechToTextLanguage = @"en-IN";

Set Text-To-Speech language

English (India) is the default language for Text-To-Speech. You can change the TTS language by passing a valid language code using the textToSpeechLanguage property of MFSDKSessionProperties. Please set the language code as per Apple guidelines.

MFSDKSessionProperties *sessionProperties = [[MFSDKSessionProperties alloc]init];
   sessionProperties.shouldSupportMultiLanguage = YES;
   sessionProperties.textToSpeechLanguage = @"en-IN";

Provide Speech Suggestions

You can provide additional contextual information for processing user speech. To provide speech suggestions, add a list of words and phrases to the MFSpeechSuggestion.json file and add it to the main bundle of your target. You can add a maximum of 150 phrases to MFSpeechSuggestion.json. To see a sample MFSpeechSuggestion.json, please download it from here.

Managing Facebook

Overview

This page documents how to set up a Facebook channel in Triniti.ai.

Enable Facebook Channel

To enable the Facebook channel, go to the "Channels" section and toggle the Facebook button.

alt_text

Before you proceed, you will need a Facebook Page, a Facebook Developer Account and a Facebook App. You can follow Facebook's official documentation to set up the Facebook App. While setting up the Facebook App, you will need a webhook URL and a verification token; use the Callback URL and Verify Token you get after enabling the Facebook channel in triniti.ai.

Configure Page Access Token And Secret Key

Triniti requires the Page Access Token to send messages from the bot. The Secret Key is required to verify that requests are coming from Facebook and not from an unauthorized source.

While setting up the Facebook App, you will have created a Page Access Token. You can find it in the 'Access Tokens' section of the Messenger settings console. Copy it into the 'Page Access Token' field.

alt_text

Facebook Messenger Settings

In the Basic Settings of the Facebook App, you will find the App Secret; copy that value into the 'Secret Key' field in triniti.ai.

alt_text

Facebook App Basic Settings

Test your Messenger bot

Congratulations, you have finished setting up your Facebook bot using triniti.ai. If everything is correctly configured, your Facebook bot should start responding to the queries it was trained on. To test that your app setup was successful, send a message to your Page from facebook.com or in Messenger.

You can also add test users for your bot. Go to the "Roles" tab of the Facebook dashboard and click on "Add testers". You can now add anyone who is your Facebook friend. For users who aren't your friends but are interested in testing your bot, you can add them using their fbid (numeric ID) or their username. A Facebook username is NOT the display name; you can get it from the URL of the person's profile page.

Managing Line

Overview

This page documents how to set up the Line channel in Triniti.ai.

Setup a new Line bot in Line developer portal

Create a new Line bot. It serves as the identity of your bot; for your users, chatting with the bot looks exactly like chatting with any other Line contact.

Create a new Line bot

Visit the Line Developer portal and log in using your Line account credentials if you already have an account; otherwise, install Line on your phone, create an account and then log in to the portal.

Click on “Add new provider” from the providers list.

alt_text

Enter the company/enterprise name and click "Confirm".

alt_text

Now tap "Create" to commit the creation.

alt_text

On the provider screen, hit “Messaging API”.

alt_text

Complete the form, selecting a developer plan and providing the name, description and app icon of your bot. Hit "Confirm".

Open a new tab, log in to the triniti.ai admin dashboard and navigate to the Line channel configuration.

Switch back to the previous tab, copy the Channel Secret key from the "Basic Information" section and paste it into the "Channel Secret key" field in the Line channel settings form.

alt_text

Similarly, copy the Channel Access Token from "Messaging settings" and paste it into the "Auth Token" field of the Line channel settings.

alt_text

Copy your application's webhook URL from the channel settings page and, after enabling webhooks, paste it into the "Webhook URL" field under "Messaging settings".

alt_text

Disable auto-reply and greeting messages under Line features.

Save the QR code of your bot.

alt_text

Congratulations! Your bot has been successfully created.

Add bot in your Line app

Go to "Play Store" on your Android phone or "App Store" in your iPhone and install “Line Messenger" app.

Open the app, follow the required steps and create a Line account.

Navigate to the profile page in your Line app, tap QR code and scan the saved QR code to add the bot as a friend.

Congratulations! Your bot has been successfully added as a friend.

Interact with Bot

Open the Line app on your phone and search for your bot by its bot name.

Select the bot and start chatting.

Managing Telegram

Overview

This page documents how to set up the Telegram channel in Triniti.ai.

Create a new Telegram bot

Go to "Play Store" on your Android phone or "App Store" in your iPhone and install "Telegram" app.

Open the app, follow the required steps and create a telegram account.

Go to the telegram search bar on your phone and search for the “botfather” telegram bot (he’s the one that’ll assist you with creating and managing your bot).

Type /help to see all the commands BotFather can handle.

alt_text

Click on or type /newbot to create a new bot.

Follow the instructions and set the "Display name" and "Username" for your bot. Note: the "Username" must end with "bot" (case-insensitive), for example "active_telegram_bot" or "trinitiBot".

alt_text

You should see a new API token generated for it, e.g. 270485614:AAHfiqksKZ8WmR2zSjiQ7_v4TMAKdiHm9T0

Note this token down and keep it safe.

Congratulations! You have created your Telegram bot.

Link bot to Microsoft Bot Framework

Log in to Microsoft Bot Framework

Navigate to "My Bots" and click "Create Bot"

alt_text

You will be redirected to Azure Portal. Log in to the portal.

Select Bot Channel Registration and provide all the necessary details.

alt_text

Create a new App ID in App Registration Portal.

alt_text

alt_text

Click "Generate an app password to continue".

alt_text

Note the password generated and keep it safe.

alt_text

alt_text

Navigate back to the Azure portal and provide the newly generated App ID and password.

Save the configuration.

Tap "Channels" under Bot Management.

alt_text

Click on "Telegram" channel.

alt_text

Paste the API token generated by BotFather in the Telegram app into the Access Token field under "Configure Telegram" and save the configuration.

alt_text

Go to "Settings" under Bot Management.

Copy your application's webhook URL from the channel settings page and paste it into "Messaging Endpoint" under Configuration.

alt_text

Congratulations! Bot setup is now complete!

Add bot in your Telegram app

Search for your newly created bot on Telegram from the search bar by typing: "@Username".

Select the bot and start chatting.

Managing Skype

Overview

This page documents how to set up the Skype channel in Triniti.ai.

Create a new Skype bot

Log in to Microsoft Bot Framework

Navigate to "My Bots" and click "Create Bot"

alt_text

You will be redirected to Azure Portal. Log in to the portal.

Select Bot Channel Registration and provide all the necessary details.

alt_text

Create a new App ID in App Registration Portal.

alt_text

alt_text

Click "Generate an app password to continue".

alt_text

Note the password generated and keep it safe.

alt_text

alt_text

Navigate back to the Azure portal and provide the newly generated App ID and password.

Save the configuration.

Tap "Channels" under Bot Management.

alt_text

Click on "Skype" channel.

alt_text

Fill in the information about the bot in the channel configuration and save.

alt_text

Open a new tab, log in to the triniti.ai admin dashboard and navigate to the Skype channel configuration.

Paste the App ID generated earlier in the App Registration Portal into the "Auth Token" field in the Skype channel settings form.

Paste the secret key generated earlier in the App Registration Portal into the "Channel Secret key" field in the Skype channel settings form.

Go to "Settings" under Bot Management.

Copy your application's webhook URL from the channel settings page and paste it into "Messaging Endpoint" under Configuration.

alt_text

Navigate to "Channels" under "Bot Management" and click on Skype channel icon listed under "Connect to Channels".

alt_text

This will redirect you to the bot invitation URL e.g. https://join.skype.com/bot/9f810913-8aa3-4f7b-aea1-0700fc897359

alt_text

Save this URL. It will be needed for users to add the bot as their Skype contact.

Congratulations! Bot setup is now complete!

Add bot in Skype

Simply visit the invitation URL saved above.

Log in to Skype and click on "Add to Contacts".

The bot is now ready to be interacted with!

Train, Deploy & Publish your Workspace

Training your Workspace

Starting your Workspace

Publishing your Workspace

Manage Self Learning

Analysing the Report

Incorporating Intent Utterances

Incorporating Dialog Utterances

Incorporating SmallTalk Utterances

Incorporating FAQ Utterances

Manage Customers & Support

Analysing the Report

Customer Profile

Customer Conversation History

Manage Metrics

Manage Billing & Subscription

Setting up Payment Information

Invoice payments can be made by credit card or EFT (Electronic Funds Transfer).

alt_text

We accept Mastercard, Visa and Amex credit cards.

alt_text

Workspace payment mapping: each workspace can be configured to use a different credit card.

alt_text

Charges

Charges can be configured either per API call or slab-wise, and can be configured differently for each plan.

Per API

Workspace charges are calculated based on total API usage. For example, if your plan charges $1.00 per API call, a workspace that consumes 500 API calls in a month is charged $500 for that month.

Slab

Managing Plans

We offer the following plans, each with different charges:

Free Basic Premium

Users can upgrade from FREE to BASIC and from BASIC to PREMIUM, but cannot downgrade.

alt_text

Managing Invoices

The invoice is generated on the 1st of every month for the previous month's workspace usage. By default, each workspace is allocated a payment method (the primary credit card), which can be changed. Separate invoices are generated for different payment methods.

For instance, suppose there are three workspaces (A, B and C). Workspaces A and B are allocated one payment method, say a Visa credit card, and workspace C is allocated another, say a Mastercard. Two invoices will be generated: one against the Visa card, covering the charges of workspaces A and B, and another against the Mastercard for workspace C.

The respective credit card will be charged the calculated invoice amount, the details of which are in the invoice PDF.

If a payment fails for a credit card, there will be three retry attempts. If the retry attempts also fail, the workspaces covered by that invoice are BLOCKED and cannot be used until payment is arranged. A 'Pay Now' option is available for failed invoices; once the invoice is paid, the workspaces become ACTIVE and can be used again.

alt_text

The invoice contains details such as the total API calls consumed, domain charges, discounts if applicable, and taxes. The invoice email is sent to the registered user's email address.

The invoice amount is charged on the same day the invoice is generated. If the invoice amount cannot be collected within three attempts, the workspace is marked as BLOCKED.

Once the invoice amount is paid through the 'Pay Now' option, the workspace status changes back to ACTIVE.

Debug Issues

Using SmartView

Using SmartAssist

Cognitive QnA Issues

Classifier Issues

NLP Issues

NLU Issues

Context Issues

Small Talk Issues

Improve Accuracy

Using SmartAssist

Finetuning Settings

Manage Updates & Upgrades

Use Pre Built Domains

Manage Ontology

Migrate from other Platforms

Migrating from Watson

Migrating from Luis

Migrating from Api.ai

Pricing

Plan Pricing details

| | FREE | BASIC | PREMIUM |
| --- | --- | --- | --- |
| Price | Free for 30 days | $300.00 for 20, $1.00 per API, billed monthly | $100.00 for 10, $1.00 per API, billed monthly |
| Small Talk | Y | Y | Y |
| FAQs | Y | Y | Y |
| Model Updates | Monthly | Monthly | Fortnightly |
| Training Cycles | 20/Month | 30/Month | Unlimited |
| Fulfilment Nodes (channel & conversational workflow runtime) | Shared | Shared | Dedicated |
| Auto Scale Hardware Capacity (scales based on user traffic) | N | N | Y |