Configuration
These are the configuration options used by the Wingman plugin.
{
    "id": "io.infomaker.wingman",
    "name": "im-wingman",
    "style": "https://plugins.writer.infomaker.io/v1/infomaker/im-wingman/1.2.0/style.css",
    "url": "https://plugins.writer.infomaker.io/v1/infomaker/im-wingman/1.2.0/index.js",
    "mandatory": false,
    "enabled": true,
    "data": {
        "host": "https://ai-eu-west-1.saas-prod.infomaker.io",
        "contextLimit": 2,
        "encryptedKeyPhrase": "somePhrase",
        "widgets": [
            "generic",
            "headline",
            "summary"
        ],
        "widgetConfig": {
            "headline": {
                "digital": {
                    "preText": "Generate a headline",
                    "creativity": 5,
                    "headlineCount": 10,
                    "digitalHeadlineWordCount": 20,
                    "checkBoxDefault": false,
                    "providerAccessToken": "someToken",
                    "serviceProvider": "openai",
                    "modelId": "gpt-3.5-turbo"
                },
                "print": {
                    "preText": "Suggest headlines for the article provided in XML tag. If the article has a strong local connection, reflect this in the headline.",
                    "creativity": 5,
                    "headlineCount": 10,
                    "printHeadlineWordCount": 8,
                    "checkBoxDefault": false,
                    "providerAccessToken": "",
                    "serviceProvider": "Bedrock",
                    "modelId": "anthropic.claude-3-sonnet-20240229-v1:0"
                }
            },
            "summary": {
                "digital": {
                    "preText": "Generate a summary",
                    "creativity": 5,
                    "summaryCount": 5,
                    "digitalSummaryWordCount": 40,
                    "checkBoxDefault": false,
                    "providerAccessToken": "someToken",
                    "serviceProvider": "openai",
                    "modelId": "gpt-3.5-turbo"
                },
                "print": {
                    "preText": "Suggest summaries for the article provided in XML tag. Act as a news editor and your task is to suggest a summary for the article.",
                    "creativity": 5,
                    "summaryCount": 5,
                    "printSummaryWordCount": 100,
                    "checkBoxDefault": false,
                    "providerAccessToken": "",
                    "serviceProvider": "Bedrock",
                    "modelId": "anthropic.claude-v2:1"
                }
            },
            "generic": {
                "digital": {
                    "preText": "Generate a headline",
                    "creativity": 5,
                    "checkBoxDefault": false,
                    "providerAccessToken": "someToken",
                    "serviceProvider": "openai",
                    "modelId": "gpt-3.5-turbo"
                },
                "print": {
                    "preText": "You are a News Editor of a News Firm and your task is to suggest headlines for the article provided in XML tag.\n\nPlease use professional tone while generating headlines.",
                    "creativity": 5,
                    "checkBoxDefault": false,
                    "providerAccessToken": "",
                    "serviceProvider": "Bedrock",
                    "modelId": "anthropic.claude-3-sonnet-20240229-v1:0"
                }
            }
        }
    }
}
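For reference, the overall shape of this configuration can be expressed as a TypeScript interface. This is a sketch derived from the example above, not an official type definition from the plugin; the widget-specific fields (headlineCount, summaryCount, and the word-count fields) are modelled as optional because they only appear for the corresponding widgets and channels.

// Sketch of the Wingman plugin configuration shape, derived from the example above.
// Not an official type definition; optional fields reflect per-widget differences.
interface WingmanChannelConfig {
    preText: string;                    // custom prompt sent to the AI service
    creativity: number;                 // 1-5, controls randomness of results
    checkBoxDefault: boolean;           // whether the check box is checked by default
    providerAccessToken: string;        // organization's own API key (ChatGPT only)
    serviceProvider: "openai" | "Bedrock";
    modelId: string;
    headlineCount?: number;             // headline widget only
    digitalHeadlineWordCount?: number;  // headline widget, digital channel
    printHeadlineWordCount?: number;    // headline widget, print channel
    summaryCount?: number;              // summary widget only
    digitalSummaryWordCount?: number;   // summary widget, digital channel
    printSummaryWordCount?: number;     // summary widget, print channel
}

interface WingmanPluginConfig {
    id: string;
    name: string;
    style: string;
    url: string;
    mandatory: boolean;
    enabled: boolean;
    data: {
        host: string;
        contextLimit: number;
        encryptedKeyPhrase: string;
        widgets: Array<"generic" | "headline" | "summary">;
        widgetConfig: {
            [widget: string]: {
                digital: WingmanChannelConfig;
                print: WingmanChannelConfig;
            };
        };
    };
}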
Configuration Details:
"data": {
"host": "https://ai-eu-west-1.saas-prod.infomaker.io",
"contextLimit": 2,
"encryptedKeyPhrase": "somePhrase",
"widgets": [
"generic",
"headline",
"summary"
],
contextLimit
: Maximum number of context tags that can be sent in the prompt (see the sketch after this list).
encryptedKeyPhrase
: Encryption key phrase used to send the API key encrypted.
widgets
: The widgets that should be enabled in the Wingman plugin.
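The snippet below is a minimal sketch of what contextLimit means in practice: only the first contextLimit context tags are included in the prompt that is sent. truncateContextTags is a hypothetical helper used for illustration and is not part of the plugin API.

// Hypothetical helper illustrating contextLimit: only the first `contextLimit`
// context tags are included when the prompt is built.
function truncateContextTags(contextTags: string[], contextLimit: number): string[] {
    return contextTags.slice(0, contextLimit);
}

// With "contextLimit": 2, only the first two tags would be sent.
const sent = truncateContextTags(["politics", "economy", "local"], 2);
// sent === ["politics", "economy"]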
Headline Configuration Details:
"widgetConfig": {
"headline": {
"digital": {
"preText": "Generate a headline",
"creativity": 5,
"headlineCount": 10,
"digitalHeadlineWordCount": 20,
"checkBoxDefault": false,
"providerAccessToken": "someToken",
"serviceProvider": "openai",
"modelId": "gpt-3.5-turbo"
},
"print": {
"preText": "Suggest headlines for the article provided in XML tag. If the article has a strong local connection, reflect this in the headline.",
"creativity": 5,
"headlineCount": 10,
"printHeadlineWordCount": 8,
"checkBoxDefault": false,
"providerAccessToken": "",
"serviceProvider": "Bedrock",
"modelId": "anthropic.claude-3-sonnet-20240229-v1:0"
}
},
preText
: The custom prompt sent to generate the headline.
*Note: For AWS Bedrock, please end your custom prompt with the words ‘in the XML tag’. Do not add these words for ChatGPT models.
creativity
: Determines the temperature, on a scale of 1-5, which controls the randomness of the results; 1 is the least random and 5 the most (one possible mapping is sketched after this list).
headlineCount
: The total number of headline results to generate.
digitalHeadlineWordCount
: The approximate word count of each generated digital headline.
printHeadlineWordCount
: The approximate word count of each generated print headline.
checkBoxDefault
: Whether the check box should be checked by default.
providerAccessToken
: *Currently applicable only to ChatGPT; lets an organization use its own API key for the ChatGPT service instead of Naviga's (if available).
serviceProvider
: The AI service provider for the prompt (openai for ChatGPT, Bedrock for AWS).
modelId
: The model of the selected service provider to use for the prompt.
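To make the relationship between these fields concrete, here is a minimal sketch of how a headline request could be assembled from the digital configuration above. Both creativityToTemperature and buildHeadlinePrompt are illustrative assumptions rather than the plugin's actual implementation, and the 1-5 creativity scale is simply mapped linearly onto a 0-1 temperature range.

// Illustrative only: how the headline fields could be combined into a request.
// creativityToTemperature and buildHeadlinePrompt are hypothetical helpers,
// not part of the Wingman plugin API.
function creativityToTemperature(creativity: number): number {
    // Assumed linear mapping of the 1-5 creativity scale onto a 0-1 temperature.
    return (Math.min(Math.max(creativity, 1), 5) - 1) / 4;
}

function buildHeadlinePrompt(
    preText: string,
    headlineCount: number,
    wordCount: number,
    articleXml: string
): string {
    return `${preText}\nReturn ${headlineCount} headlines of about ${wordCount} words each.\n<article>${articleXml}</article>`;
}

// Using the digital headline config shown above:
const headlinePrompt = buildHeadlinePrompt("Generate a headline", 10, 20, "<p>Article body</p>");
const temperature = creativityToTemperature(5); // 1.0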
Summary Configuration Details:
"summary": {
"digital": {
"preText": "Generate a summary",
"creativity": 5,
"summaryCount": 5,
"digitalSummaryWordCount": 40,
"checkBoxDefault": false,
"providerAccessToken": "someToken",
"serviceProvider": "openai",
"modelId": "gpt-3.5-turbo"
},
"print": {
"preText": "Suggest summaries for the article provided in XML tag. Act as a news editor and your task is to suggest a summary for the article.",
"creativity": 5,
"summaryCount": 5,
"printSummaryWordCount": 100,
"checkBoxDefault": false,
"providerAccessToken": "",
"serviceProvider": "Bedrock",
"modelId": "anthropic.claude-v2:1"
}
},
preText
: The custom prompt sent to generate the summary.
*Note: For AWS Bedrock it is recommended to end your custom prompt with the words ‘in the XML tag’. Do not add these words for ChatGPT models.
creativity
: Determines the temperature, on a scale of 1-5, which controls the randomness of the results; 1 is the least random and 5 the most.
summaryCount
: The total number of summary results to generate.
digitalSummaryWordCount
: The approximate word count of each generated digital summary.
printSummaryWordCount
: The approximate word count of each generated print summary (channel selection is sketched after this list).
displayCount
: The number of results to display before the Show more button.
checkBoxDefault
: Whether the check box should be checked by default.
providerAccessToken
: *Currently applicable only to ChatGPT; lets an organization use its own API key for the ChatGPT service instead of Naviga's (if available).
serviceProvider
: The AI service provider for the prompt (openai for ChatGPT, Bedrock for AWS).
modelId
: The model of the selected service provider to use for the prompt.
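As a sketch of how the digital and print variants could be selected at runtime, the snippet below picks the summary settings for a given channel and resolves the matching word-count field. getSummarySettings is a hypothetical helper; the channel names simply mirror the configuration keys and this is not the plugin's actual implementation.

// Hypothetical helper: select the summary settings for a channel and resolve
// the channel-specific word-count field. Not part of the Wingman plugin API.
type Channel = "digital" | "print";

interface SummaryWidgetConfig {
    digital: { preText: string; summaryCount: number; digitalSummaryWordCount: number };
    print: { preText: string; summaryCount: number; printSummaryWordCount: number };
}

function getSummarySettings(config: SummaryWidgetConfig, channel: Channel) {
    if (channel === "digital") {
        const c = config.digital;
        return { preText: c.preText, summaryCount: c.summaryCount, wordCount: c.digitalSummaryWordCount };
    }
    const c = config.print;
    return { preText: c.preText, summaryCount: c.summaryCount, wordCount: c.printSummaryWordCount };
}

// With the example above, getSummarySettings(config, "print")
// yields summaryCount 5 and wordCount 100.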
Generic Configuration Details:
"generic": {
"digital": {
"preText": "Generate a headline",
"creativity": 5,
"checkBoxDefault": false,
"providerAccessToken": "someToken",
"serviceProvider": "openai",
"modelId": "gpt-3.5-turbo"
},
"print": {
"preText": "You are a News Editor of a News Firm and your task is to suggest headlines for the article provided in XML tag.\n\nPlease use professional tone while generating headlines.",
"creativity": 5,
"checkBoxDefault": false,
"providerAccessToken": "",
"serviceProvider": "Bedrock",
"modelId": "anthropic.claude-3-sonnet-20240229-v1:0"
}
}
}
}
}
preText
: The custom prompt sent to generate the generic results.
*Note: For AWS Bedrock it is recommended to end your custom prompt with the words ‘in the XML tag’. Do not add these words for ChatGPT models.
creativity
: Determines the temperature, on a scale of 1-5, which controls the randomness of the results; 1 is the least random and 5 the most.
checkBoxDefault
: Whether the check box should be checked by default.
providerAccessToken
: *Currently applicable only to ChatGPT; lets an organization use its own API key for the ChatGPT service instead of Naviga's (if available).
serviceProvider
: The AI service provider for the prompt (openai for ChatGPT, Bedrock for AWS); see the sketch after this list.
modelId
: The model of the selected service provider to use for the prompt.
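As a final illustration of how serviceProvider, modelId, and providerAccessToken fit together, the sketch below branches on the provider the way a client might. The request payload shape and the buildProviderRequest helper are assumptions made for illustration only; they are not the Wingman backend's actual API.

// Illustrative only: how a client might branch on serviceProvider.
// The request payload shape is an assumption, not the Wingman backend API.
interface GenericChannelConfig {
    preText: string;
    creativity: number;
    providerAccessToken: string;
    serviceProvider: "openai" | "Bedrock";
    modelId: string;
}

function buildProviderRequest(cfg: GenericChannelConfig, articleXml: string) {
    const prompt = `${cfg.preText}\n<article>${articleXml}</article>`;
    if (cfg.serviceProvider === "openai") {
        // ChatGPT: the organization's own key (providerAccessToken) may be used if set.
        return {
            provider: "openai",
            model: cfg.modelId,
            prompt,
            creativity: cfg.creativity, // mapping to temperature is provider-specific
            apiKey: cfg.providerAccessToken || undefined
        };
    }
    // Bedrock: providerAccessToken is not used; preText is expected to end with 'in the XML tag'.
    return {
        provider: "Bedrock",
        model: cfg.modelId,
        prompt,
        creativity: cfg.creativity
    };
}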