Requests are made using GET or POST data submissions to the API entry point. POST is recommended in order to avoid the maximum parameter length limit associated with GET requests.
This is the endpoint to access the API.
Service | Method | URL
---|---|---
Sentiment Analysis | POST | https://api.meaningcloud.com/sentiment-2.1
If you are working with an on-premises installation, you will need to replace api.meaningcloud.com with your own server address.
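As a minimal sketch of a request (using the Python requests library and a placeholder access key; the parameters used are described in the table below), a POST call might look like this:

```python
import requests

# Placeholder access key; replace with the key from your MeaningCloud account.
API_KEY = "your_access_key"

response = requests.post(
    "https://api.meaningcloud.com/sentiment-2.1",
    data={
        "key": API_KEY,
        "lang": "en",  # language of the text to analyze
        "txt": "The service was excellent, but the food was disappointing.",
    },
)

print(response.json())  # parsed JSON response (of=json is the default)
```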
These are the supported parameters.
Name | Description | Values | Default
---|---|---|---
key | The access key is required for making requests to any of our web services. You can get a valid access key for free just by creating an account at MeaningCloud. | | Required
of | Output format. | xml, json | Optional. Default: of=json
lang | Language in which the text must be analyzed. | See the models and languages section for further information. | Required
ilang | Language in which the returned values will appear (when they are known). Check the response section to see which fields are affected. | en: English, es: Spanish, it: Italian, fr: French, pt: Portuguese, ca: Catalan, da: Danish, sv: Swedish, no: Norwegian, fi: Finnish | Optional. Default: same as lang
verbose | Verbose mode. When enabled, it shows additional information about the sentiment analysis; specifically, the changes applied to the basic polarity of the different polarity terms detected. | y: enabled, n: disabled | Optional. Default: verbose=n
txt | Input text to analyze. | UTF-8 encoded text (plain text, HTML or XML). | Required
txtf | Text format. Specifies whether the text in the txt parameter uses markup that needs to be interpreted (known HTML tags and HTML code are interpreted; unknown tags are ignored). | plain, markup | Optional. Default: txtf=plain
url | URL with the content to analyze. Currently only non-authenticated HTTP and FTP are supported. The content types supported for URL contents can be found here. | | Optional. Default: url=""
doc | Input file with the content to analyze. The supported formats for file contents can be found here. | | Optional. Default: doc=""
model | Sentiment model chosen. | See the models and languages section. | Optional. Default: model=general
egp | Expand global polarity. This mode allows you to choose between two different algorithms for the polarity detection of entities and concepts. Enabling it gives less weight to syntactic relationships, so it is recommended for short texts with unreliable typography. | y: enabled, n: disabled | Optional. Default: egp=n
rt | Indicates how reliable the text to analyze is (as far as spelling, typography, etc. are concerned) and how strict the engine will be when taking these factors into account in the analysis. | y: enabled for all resources, u: enabled just for the user dictionary, n: disabled | Optional. Default: rt=n
uw | Deal with unknown words. This feature adds a stage to the topic extraction in which the engine, much like a spellchecker, tries to find a suitable analysis for the unknown words resulting from the initial assignment. It is especially useful to reduce the impact of typos on text analyses. | y: enabled, n: disabled | Optional. Default: uw=n
dm | Type of disambiguation applied. It is cumulative, that is, the semantic disambiguation mode also includes morphosyntactic disambiguation. | n: no disambiguation, m: morphosyntactic disambiguation, s: semantic disambiguation | Optional. Default: dm=s
sdg | Semantic disambiguation grouping. This parameter only applies when semantic disambiguation is activated (dm=s). | n: none, g: global intersection, t: intersection by type, l: intersection by type - smallest location | Optional. Default: sdg=l
cont | Disambiguation context. Context prioritization for entity semantic disambiguation. See context disambiguation for a more in-depth explanation. | | Optional. Default: cont=""
ud | The user dictionary allows you to include user-defined entities and concepts in the sentiment analysis. It provides a mechanism to adapt the process to specific domains or to terms relevant to a user's interests, either to increase precision in any of the domains already covered by our ontology, to include a new one, or to add a new semantic meaning to known terms. Several dictionaries can be combined. | Names of your user dictionaries. | Optional. Default: ud=""
The fields txt, doc and url are mutually exclusive: at least one of them must not be empty (a content parameter is required), and when more than one of them has a value assigned, only one will be processed. The precedence order is txt, url and doc.
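As an illustrative sketch (same placeholder key as above), switching from inline text to a URL only means changing which content parameter you send:

```python
import requests

API_KEY = "your_access_key"  # placeholder

# Analyze the content behind a URL: send url and leave txt and doc out.
# If txt were also given a value, it would take precedence over url.
response = requests.post(
    "https://api.meaningcloud.com/sentiment-2.1",
    data={
        "key": API_KEY,
        "lang": "en",
        "url": "https://www.example.com/article.html",  # hypothetical URL
    },
)
print(response.json())
```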
Sentiment models are defined for a particular language. Sentiment analysis relies on a morphosyntactic analysis, which is tied directly to the language; this is why the lang parameter is required.
You will only be able to analyze a text in a particular language if the lang parameter and the language of the model are the same; otherwise an error will be raised.
The lang parameter has a value with special behavior: when you set it to auto, the language of the text is detected automatically, and the engine tries to find a model with the name given in the model parameter for the detected language. If it finds a model that fits, the sentiment analysis is carried out using that model; otherwise, an error is raised.
This functionality is especially useful when working with a multilingual set of texts, as it allows you to define a single request to the API without having to change its parameters depending on the text.
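For instance, a sketch of a single request reused across a multilingual batch (hypothetical texts; it assumes the general model exists for every detected language):

```python
import requests

API_KEY = "your_access_key"  # placeholder

texts = [
    "The plot was gripping from start to finish.",              # English
    "La comida estaba deliciosa pero el servicio fue lento.",   # Spanish
]

for text in texts:
    # lang=auto detects the language and looks for a "general" model for it;
    # if no matching model exists, the API returns an error instead.
    response = requests.post(
        "https://api.meaningcloud.com/sentiment-2.1",
        data={"key": API_KEY, "lang": "auto", "model": "general", "txt": text},
    )
    print(response.json())
```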
MeaningCloud currently supports a generic sentiment model (called general) in the following languages (lang parameter):
You can define your own sentiment models using the user sentiment models console and work with them in the same way as with the sentiment models we provide.
If you don't have access to the Nordic pack yet, you can request it easily from the developers home. You can read more about it in this post.