Azure ML uses model evaluation to measure the accuracy of a trained model.
For classification models, the Evaluate Model module provides the following five metrics (a short sketch follows the list):
1. Accuracy
2. Precision
3. Recall
4. F1 score
5. Area under curve (AUC).
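A minimal scikit-learn sketch (not part of Azure ML itself) of how these five metrics can be computed for a binary classifier; the labels and probabilities below are made-up values for illustration.

```python
# Compute the five classification metrics with scikit-learn (illustrative data).
from sklearn.metrics import (accuracy_score, precision_score, recall_score,
                             f1_score, roc_auc_score)

y_true = [1, 0, 1, 1, 0, 1, 0, 0]                   # actual labels
y_prob = [0.9, 0.2, 0.8, 0.4, 0.1, 0.7, 0.6, 0.3]   # predicted probabilities
y_pred = [1 if p >= 0.5 else 0 for p in y_prob]     # thresholded predictions

print("Accuracy :", accuracy_score(y_true, y_pred))
print("Precision:", precision_score(y_true, y_pred))
print("Recall   :", recall_score(y_true, y_pred))
print("F1 score :", f1_score(y_true, y_pred))
print("AUC      :", roc_auc_score(y_true, y_prob))
```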
Two data sources for the Import Data module in the Azure ML designer:
a datastore and a URL via HTTP.
The recall metric defines how many of the actual positive cases the model predicted correctly. We can calculate this metric using the following formula:
Recall = TP / (TP + FN)
What is the expression for calculating the model's precision value?
Precision measures how many of the cases predicted as positive are actually positive:
Precision = TP / (TP + FP)
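A tiny worked sketch of the two formulas above, using made-up TP/FP/FN counts:

```python
# Illustrative confusion-matrix counts (assumptions, not from the notes).
TP, FP, FN = 40, 10, 20

recall = TP / (TP + FN)       # 40 / 60 ≈ 0.667
precision = TP / (TP + FP)    # 40 / 50 = 0.8

print(f"Recall    = {recall:.3f}")
print(f"Precision = {precision:.3f}")
```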
Three main authoring tools:
1. Notebooks.
2. Designer.
3. Automated ML
Two models of bot integration with agent engagement platforms, such as customer support services, are supported.
These two models are bot as agent and bot as proxy.
The bot-as-agent model integrates the bot at the same level as live agents. The bot engages in interactions the same way as customer support personnel.
A handoff protocol regulates the bot's disengagement and the transfer of the user's communication to a live person.
This is the most straightforward model to implement.
The bot-as-proxy model integrates the bot as the primary filter before the user interacts with a live agent.
The bot's logic decides when to transfer a conversation and where to route it. This model is more complicated to implement.
Two APIs:
1. Text-to-speech
2. Speech-to-text
This also includes speech recognition and speech synthesis.
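A hedged sketch of both APIs using the azure-cognitiveservices-speech Python SDK; the key and region values are placeholders.

```python
import azure.cognitiveservices.speech as speechsdk

speech_config = speechsdk.SpeechConfig(subscription="<YOUR_KEY>", region="<YOUR_REGION>")

# Speech-to-text: recognize a single utterance from the default microphone.
recognizer = speechsdk.SpeechRecognizer(speech_config=speech_config)
result = recognizer.recognize_once()
print("Recognized:", result.text)

# Text-to-speech: synthesize a short phrase to the default speaker.
synthesizer = speechsdk.SpeechSynthesizer(speech_config=speech_config)
synthesizer.speak_text_async("Turn the lights on").get()
```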
Elements:
1. Entities.
2. Utterances.
3. Intents.
We can achieve this by using the Azure Cognitive Services LUIS portal.
An entity is the word or phrase that is the focus of the utterance, such as the word "light" in the utterance "Turn the lights on".
An intent is the action or task that the user wants to execute. It is reflected in an utterance as a goal or purpose.
For example, we can define the intent "Turn On" for the utterance "Turn the lights on".
An utterance is the user's input that your model needs to interpret, like "Turn the lights on" or "Turn on the lights".
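Illustrative only: a rough sketch of how an utterance, its intent and its entity fit together (field names are simplified, not the exact LUIS API schema).

```python
# A simplified picture of a LUIS-style prediction for one utterance.
prediction = {
    "utterance": "Turn the lights on",   # the user's input to interpret
    "topIntent": "TurnOn",               # the goal/purpose detected in the utterance
    "entities": ["light"],               # the word(s) the utterance focuses on
}

if prediction["topIntent"] == "TurnOn" and "light" in prediction["entities"]:
    print("Switching the lights on")
```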
The residual histogram presents the frequency distribution of residual values.
Residual is the difference between predicted and actual values.
It represents the amount of error in the model.
For a good model, we expect most of the errors to be small; they will cluster around 0 on the residual histogram (see the sketch below).
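A small sketch of plotting a residual histogram with NumPy and Matplotlib; the predicted/actual values are synthetic, just to show errors clustering around 0.

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
actual = rng.normal(100, 15, size=500)
predicted = actual + rng.normal(0, 5, size=500)   # a "good" model: small errors

residuals = predicted - actual                    # residual = predicted - actual
plt.hist(residuals, bins=30)
plt.axvline(0, color="red")                       # errors should cluster around 0
plt.xlabel("Residual")
plt.ylabel("Frequency")
plt.title("Residual histogram")
plt.show()
```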
1. Used with the sidecar pattern.
2. Observability (o11y).
3. Security.
4. New version rollout.
What does "fallacies" mean?
Fallacies means misconceptions.
1. The network is reliable.
2. Latency is zero.
3. Bandwidth is infinite.
4. The network is secure.
5. Topology does not change.
6. There is one administrator.
7. Transport cost is zero.
8. Network is homogeneous.
Netflix OSS is used mainly with the Java and Spring Boot frameworks.
This makes for very heavyweight application management.
Polyglot microservices allow developers to pick a programming language of their choice in order to build products more efficiently.
Six principles of a responsible AI solution:
Fairness
Reliability and safety
Privacy and security
Inclusiveness
Transparency
Accountability
The principle of inclusiveness directs AI solutions to provide their benefits to everybody, without any barriers or limitations.
Microsoft defines three inclusive design principles:
1. Recognize exclusion.
2. Solve for one, extend to many.
3. Learn from diversity.
Azure Cognitive Services you can use to build natural language processing solutions:
NLP is one of the key elements of AI; it includes four services:
1. Text Analytics: helps analyze text documents, detect the document's language, extract key phrases, determine entities and provide sentiment analysis.
2. Translator Text: helps translate between 60+ languages.
3. Speech: helps recognize and synthesize speech, recognize and identify speakers, and translate live or recorded speech.
4. LUIS: helps understand voice or text commands.
KNOWLEDGE BASE FOR THE QnA SERVICE:
The size of the knowledge base depends on two things: the size of the indexes, which is governed by the Cognitive Search pricing tier limits, and the QnA Maker limits.
Telephone voice menu functionality is a good example of a speech synthesis service.
How do you identify the services involved in live speech translation?
Live speech translation involves the following sequence of services during real-time translation:
audio stream -> speech-to-text -> speech correction -> machine translation -> text-to-speech
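A hedged sketch of live speech translation with the azure-cognitiveservices-speech SDK; the key, region and language choices are assumptions.

```python
import azure.cognitiveservices.speech as speechsdk

translation_config = speechsdk.translation.SpeechTranslationConfig(
    subscription="<YOUR_KEY>", region="<YOUR_REGION>")
translation_config.speech_recognition_language = "en-US"
translation_config.add_target_language("fr")

recognizer = speechsdk.translation.TranslationRecognizer(
    translation_config=translation_config)

# One utterance from the microphone: speech-to-text plus machine translation.
result = recognizer.recognize_once()
print("Recognized (en):", result.text)
print("Translated (fr):", result.translations["fr"])
```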
What are the services provided by Text Analytics?
Text Analytics is a part of natural language processing.
It includes the following services:
1. Sentiment analysis
2. Key phrase detection
3. Entity detection
4. Language detection
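A minimal sketch of these four capabilities using the azure-ai-textanalytics SDK; the endpoint, key and sample document are placeholders.

```python
from azure.core.credentials import AzureKeyCredential
from azure.ai.textanalytics import TextAnalyticsClient

client = TextAnalyticsClient(endpoint="<YOUR_ENDPOINT>",
                             credential=AzureKeyCredential("<YOUR_KEY>"))
docs = ["The hotel was great and the staff were friendly."]

sentiment = client.analyze_sentiment(docs)[0]     # sentiment analysis
phrases = client.extract_key_phrases(docs)[0]     # key phrase detection
entities = client.recognize_entities(docs)[0]     # entity detection
language = client.detect_language(docs)[0]        # language detection

print(sentiment.sentiment)
print(phrases.key_phrases)
print([e.text for e in entities.entities])
print(language.primary_language.name)
```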
Speech recognition is a part of the Speech service.
It uses different models; the two essential ones are the acoustic model and the language model.
Acoustic model - helps convert audio into phonemes.
Language model - helps to match phonemes with words.
Three key fields that the Form Recognizer service extracts from common receipts.
The Form Recognizer service is one of the Azure computer vision solutions, alongside the Computer Vision service, Custom Vision service and Face service.
The Form Recognizer service uses prebuilt receipt models to extract the following information from receipts: date of transaction, time of transaction, merchant information, taxes paid and receipt total.
The service also recognizes all the text on the receipt and returns it.
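A hedged sketch of prebuilt receipt analysis with the azure-ai-formrecognizer SDK (v3-style client); the endpoint, key and receipt URL are placeholders.

```python
from azure.core.credentials import AzureKeyCredential
from azure.ai.formrecognizer import FormRecognizerClient

client = FormRecognizerClient(endpoint="<YOUR_ENDPOINT>",
                              credential=AzureKeyCredential("<YOUR_KEY>"))

poller = client.begin_recognize_receipts_from_url("<RECEIPT_IMAGE_URL>")
receipt = poller.result()[0]

# Typical prebuilt receipt fields: merchant, transaction date/time, tax, total.
for name in ("MerchantName", "TransactionDate", "TransactionTime", "Tax", "Total"):
    field = receipt.fields.get(name)
    if field:
        print(name, "=", field.value)
```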
Reads the text in the image.
Detects objects.
Identifies landmarks.
Categorizes images.
Notes:
Computer vision is one of the main areas of AI. The Computer Vision service belongs to the group of Azure computer vision solutions, together with the Custom Vision service, Face service and Form Recognizer.
The Computer Vision service works with images. It makes sense of the image pixels by using them as features for ML models.
These predefined models help categorize and classify images, detect and recognize objects, and tag and identify them.
Computer Vision can read text in images in 25 languages and recognize landmarks.
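A minimal sketch of image analysis with the azure-cognitiveservices-vision-computervision SDK; the endpoint, key and image URL are placeholders.

```python
from msrest.authentication import CognitiveServicesCredentials
from azure.cognitiveservices.vision.computervision import ComputerVisionClient
from azure.cognitiveservices.vision.computervision.models import VisualFeatureTypes

client = ComputerVisionClient("<YOUR_ENDPOINT>",
                              CognitiveServicesCredentials("<YOUR_KEY>"))

analysis = client.analyze_image(
    "<IMAGE_URL>",
    visual_features=[VisualFeatureTypes.categories,
                     VisualFeatureTypes.objects,
                     VisualFeatureTypes.tags])

print([c.name for c in analysis.categories])           # image categories
print([o.object_property for o in analysis.objects])   # detected objects
print([t.name for t in analysis.tags])                 # tags
```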
When the application processes images, it uses semantic segmentation to classify the pixels that belong to a particular object (in our case, flooded areas) and highlights them.
You use your camera to capture a picture of the product. An application identifies this product using an image classification model and submits it for a search.
The image classification model helps to classify images based on their content.
After we ingest the data, we need to do data preparation or transformation before supplying it for model training.
There are four typical steps for data transformation: feature selection,
finding and removing data outliers,
imputing missing values, and normalization.
We then need to split the data into two sets:
the first is for model training
and the second is for model testing (a short sketch follows this note).
Note: if we are using automated machine learning, it does this for us automatically as part of data preparation and model training.
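A minimal scikit-learn sketch of these preparation steps (imputing, normalizing, splitting); the tiny dataset is made up for illustration.

```python
import numpy as np
from sklearn.impute import SimpleImputer
from sklearn.preprocessing import MinMaxScaler
from sklearn.model_selection import train_test_split

X = np.array([[1.0, 200.0], [2.0, np.nan], [3.0, 180.0], [4.0, 220.0]])
y = np.array([0, 1, 0, 1])

X = SimpleImputer(strategy="mean").fit_transform(X)   # impute missing values
X = MinMaxScaler().fit_transform(X)                   # normalize features to [0, 1]

# Split: one set for model training, the other for model testing.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
```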
Clustering is a form of machine learning that groups items based on common properties.
The most common clustering algorithm is k-means clustering.
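A minimal k-means sketch with scikit-learn; the 2-D points and the choice of two clusters are made up.

```python
import numpy as np
from sklearn.cluster import KMeans

points = np.array([[1, 1], [1.2, 0.8], [0.9, 1.1],
                   [8, 8], [8.2, 7.9], [7.8, 8.1]])

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(points)
print(kmeans.labels_)           # cluster assignment for each point
print(kmeans.cluster_centers_)  # the two cluster centers
```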
This is like answering a binary yes-or-no question. You can achieve this by creating a classification model based on the data from the historical reviews.
What are the algorithms used for a classification model in ML?
All algorithms in the ML classification family include the word "class" in their names (a short sketch follows the extra info below), like
1) Two-class logistic regression
2) Multiclass logistic regression
or
3) Multiclass decision forest
Extra info about the algorithms:
A similar example for regression:
the regression algorithm family has the word "regression" in their names, without "class", like linear regression or decision forest regression.
There is only one clustering algorithm, called k-means clustering.
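A minimal two-class classification sketch with scikit-learn's logistic regression, mirroring the binary yes/no example above; the review-like features are invented.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Features: e.g. [review length, number of negative words] (illustrative only).
X = np.array([[120, 0], [30, 5], [200, 1], [25, 7], [150, 0], [40, 6]])
y = np.array([1, 0, 1, 0, 1, 0])   # 1 = positive review, 0 = negative review

model = LogisticRegression().fit(X, y)
print(model.predict([[100, 1]]))        # predicted class for a new review
print(model.predict_proba([[100, 1]]))  # class probabilities
```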
Fairness
Reliability and safety
Privacy and security
Transparency
Inclusiveness
Accountability
Fairness:
The principle of fairness directs AI solutions to treat everybody fairly, independently of gender, race or any bias.
Accountability:
The principle of accountability directs AI solutions to follow governance and organizational norms.
Semantic segmentation is an advanced machine learning technique in which individual pixels in the image are classified according to the object to which they belong.
Azure Bot Service serves as the data input for a virtual assistant.
How do you extend the capabilities of your bot?
Using Bot Framework Skills, you can easily extend the capabilities of your bot. Skills are like standalone bots that focus on a specific function, such as calendar, to-do, point of interest, etc.
In the virtual assistant design, the Bot Framework dispatches actions to skills.
Components of a chat bot
To create a web chat bot,
you need just two components: a knowledge base and a bot service.
Knowledge base:
we can create a knowledge base from website information, FAQ documents, etc.
Usually a knowledge base is a list of question-and-answer pairs.
Bot service: provides an interface for interacting with the knowledge base from different channels.
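A hedged sketch of querying a published QnA Maker knowledge base over REST; the hostname, knowledge base ID and endpoint key are placeholders (assumptions).

```python
import requests

host = "https://<YOUR-QNA-RESOURCE>.azurewebsites.net"
kb_id = "<KNOWLEDGE_BASE_ID>"
endpoint_key = "<ENDPOINT_KEY>"

# Ask the knowledge base a question and print the matched answer(s).
response = requests.post(
    f"{host}/qnamaker/knowledgebases/{kb_id}/generateAnswer",
    headers={"Authorization": f"EndpointKey {endpoint_key}"},
    json={"question": "What are your opening hours?"})

for answer in response.json().get("answers", []):
    print(answer["answer"], answer["score"])
```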
Four types of entities that you can create while authoring a LUIS application.
During the authoring phase of a LUIS application, we need to create intents and entities and train a model. There are four types of entities that we can create:
Machine-learned
List
RegEx
Pattern.any
Main features and capabilities of Azure Machine Learning
Azure Machine Learning is the foundation of AI; it includes the following four features and capabilities:
Automated machine learning: automated creation of ML models based on your data; does not require any data science experience.
Azure Machine Learning designer: a graphical interface for no-code creation of ML solutions.
Data and compute management: cloud-based data storage and compute resources for data science professionals.
Pipelines: visual designer for creating workflows of ML tasks.
If you create a Cognitive Services resource to train and publish the Custom Vision model, you can provide the Cognitive Services endpoint and key to the developers for access to the model.
But if you use the Custom Vision portal or create a Custom Vision resource within Cognitive Services, you will have two separate resources for training and publishing a model. In this case, you need to provide four pieces of information to the developers (a short sketch follows this list):
ProjectID
Model name
Prediction key
Prediction endpoint
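A minimal sketch of calling a published Custom Vision model using those four pieces of information with the azure-cognitiveservices-vision-customvision SDK; all values are placeholders.

```python
from msrest.authentication import ApiKeyCredentials
from azure.cognitiveservices.vision.customvision.prediction import CustomVisionPredictionClient

prediction_endpoint = "<PREDICTION_ENDPOINT>"
prediction_key = "<PREDICTION_KEY>"
project_id = "<PROJECT_ID>"
model_name = "<PUBLISHED_MODEL_NAME>"

credentials = ApiKeyCredentials(in_headers={"Prediction-key": prediction_key})
predictor = CustomVisionPredictionClient(prediction_endpoint, credentials)

# Classify an image by URL against the published model.
results = predictor.classify_image_url(project_id, model_name, "<IMAGE_URL>")
for prediction in results.predictions:
    print(prediction.tag_name, f"{prediction.probability:.2%}")
```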
Read API:
It is part of the Computer Vision service. It helps read text within predominantly document images. The Read API is an asynchronous service specially designed for text-heavy images or documents with a lot of distortions.
It produces a result that includes: page information for each page, including page size and orientation;
information about each line on the page; and information about each word in each line, including the bounding box of each word as an indication of the word's position in the image.
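A hedged sketch of the asynchronous Read API flow with the azure-cognitiveservices-vision-computervision SDK; endpoint, key and image URL are placeholders.

```python
import time
from msrest.authentication import CognitiveServicesCredentials
from azure.cognitiveservices.vision.computervision import ComputerVisionClient
from azure.cognitiveservices.vision.computervision.models import OperationStatusCodes

client = ComputerVisionClient("<YOUR_ENDPOINT>",
                              CognitiveServicesCredentials("<YOUR_KEY>"))

# Submit the read operation, then poll for the result (the API is asynchronous).
read_response = client.read("<DOCUMENT_IMAGE_URL>", raw=True)
operation_id = read_response.headers["Operation-Location"].split("/")[-1]

while True:
    result = client.get_read_result(operation_id)
    if result.status not in (OperationStatusCodes.running, OperationStatusCodes.not_started):
        break
    time.sleep(1)

if result.status == OperationStatusCodes.succeeded:
    for page in result.analyze_result.read_results:   # per-page information
        for line in page.lines:                        # per-line text and position
            print(line.text, line.bounding_box)
```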
compute instances
compute clusters
inference clusters
attached compute.
What are the benefits of the object detection model?
Object detection is a form of ML that helps recognize objects in images. Each recognized object is placed in a bounding box with a class name and a probability score.
Data pre-processing involves various techniques such as scaling, normalization, feature engineering, etc.
The confusion matrix provides a tabulated view of predicted and actual values for each class.
E.g., if we are predicting classes for a 10-class problem, our confusion matrix will be 10 x 10 in size.
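A minimal confusion matrix sketch with scikit-learn; the three-class labels are made up, so the matrix is 3 x 3.

```python
from sklearn.metrics import confusion_matrix

y_true = ["cat", "dog", "bird", "cat", "dog", "bird", "cat"]
y_pred = ["cat", "dog", "cat",  "cat", "bird", "bird", "dog"]

# Rows = actual classes, columns = predicted classes.
print(confusion_matrix(y_true, y_pred, labels=["bird", "cat", "dog"]))
```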
Feature selection helps us narrow down the features that are important for our label prediction and discard all the features that play no role, or only a minimal role, in label prediction. As a result, our model training and prediction will be more efficient.
Five key elements of artificial intelligence:
Machine learning: the foundation of AI systems.
Anomaly detection: tools and services for identifying unusual activities.
Computer vision: tools and services for understanding and recognizing objects in images and videos, as well as faces and text.
Natural language processing: tools and services for language understanding: text, speech, text analysis, and translation.
Conversational AI: tools and services for intelligent conversation.
What are the six principles of responsible AI?
Fairness, reliability and safety, privacy and security, transparency, inclusiveness, and accountability.
==================================================================
How do you make a numeric prediction?
We can achieve this by creating a regression model based on historical data from previous quarters (a short sketch follows below).
There are two types of machine learning:
1) Supervised machine learning.
There are two parts of supervised machine learning: a) regression and b) classification modeling types.
(Regression is the right one for the numeric prediction above.)
2) Unsupervised machine learning.
Only the clustering model is related to unsupervised machine learning.
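A minimal regression sketch with scikit-learn (supervised learning with a numeric label); the quarterly history is invented for illustration.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Feature: quarter index; label: the numeric value observed in that quarter.
quarters = np.array([[1], [2], [3], [4], [5], [6]])
values = np.array([100.0, 110.0, 123.0, 135.0, 148.0, 160.0])

model = LinearRegression().fit(quarters, values)
print(model.predict([[7]]))   # predicted numeric value for the next quarter
```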