Azure AI Notes

Azure AI

What is AI?

  • Software that imitates human behaviors and capabilities.
1. Machine Learning:
  • Data is used to train models so that they can make predictions.
2. Anomaly Detection:
  • Analyzes data over time to detect unusual changes or deviations.
3. Computer Vision:
  • How AI sees the world.
  • Examines images or video from a camera.
  • Examples: Facial recognition.
4. Natural Language Processing:
  • AI that understands written and spoken language.
  • Includes speech recognition and speech synthesis.
5. Conversational AI:
  • AI that can chat and converse with humans.

Responsible AI:

  • Fairness
  • Reliability and Safety
  • Privacy and Security
  • Inclusiveness
  • Transparency
  • Accountability

Azure AI Demonstrations

Computer Vision:
  • Can analyze and describe images, providing a confidence ranking.
Language Understanding:
  • Demo that interprets user input, identifying its intent and entities.
Text Analytics:
  • Can analyze sentiment and extract key phrases.

Machine Learning (Chapter 3)

  • Uses math and statistics to create a model that can predict unknown values.
Datasets:
  • A collection of data used to train a Machine Learning Model (MLM).
  • Features are used to predict labels.
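A minimal sketch of the features-vs-label idea (the column names here are made up for illustration): each row holds feature values used to predict one label value.

```python
# Hypothetical dataset: each row has feature columns and one label column.
rows = [
    {"temperature": 20, "humidity": 0.6, "rentals": 120},  # "rentals" is the label
    {"temperature": 25, "humidity": 0.4, "rentals": 180},
]

# Separate the features (inputs) from the label (the value to predict).
features = [{k: v for k, v in row.items() if k != "rentals"} for row in rows]
labels = [row["rentals"] for row in rows]
```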
Azure Services:
  • Azure Automated Machine Learning: User picks dataset and label, and the service automates the rest.
  • Azure Machine Learning Designer: Create a pipeline to train a machine learning model without code.
Types of Machine Learning:
  1. Regression:

    • Use historical data to predict a numerical label.
    • Type: SUPERVISED LEARNING
  2. Classification:

    • Use historical data to predict a category or class.
    • Type: SUPERVISED LEARNING
  3. Clustering:

    • Group similar items into clusters based on their features.
    • No label is used.
    • Type: UNSUPERVISED
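As a sketch of supervised regression (not the Azure service itself): historical feature/label pairs train a model, which then predicts a numeric label for new input. Here a 1-D line is fit by least squares.

```python
# Minimal 1-D linear regression by least squares: train on historical
# (feature, label) pairs, then predict an unknown numeric label.
def fit_line(xs, ys):
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    intercept = mean_y - slope * mean_x
    return slope, intercept

# Train on historical data, then predict for an unseen input.
slope, intercept = fit_line([1, 2, 3, 4], [2, 4, 6, 8])
predict = lambda x: slope * x + intercept  # predict(5) → 10.0
```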

Training, Validation, and Data Sets

  1. Data Prep:

    • Cleaning data, normalization.
  2. Training our models:

    • Picking which model to train.
  3. Validation:

    • Use the Split Data module to hold back test data, then compare actual and predicted values.
  4. Evaluation:

    • Using metrics such as R² (coefficient of determination) and root mean squared error (RMSE).
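The steps above can be sketched in plain Python (the data values are illustrative): split the data, then score predictions with RMSE and R².

```python
import math
import random

# Hold back a portion of the data for validation (e.g. a 70/30 split).
data = list(range(10))
random.seed(0)
random.shuffle(data)
train, test = data[:7], data[7:]

def rmse(actual, predicted):
    # Root mean squared error: average squared difference, then square root.
    return math.sqrt(sum((a - p) ** 2 for a, p in zip(actual, predicted)) / len(actual))

def r2(actual, predicted):
    # R²: 1 minus (residual sum of squares / total sum of squares).
    mean_a = sum(actual) / len(actual)
    ss_res = sum((a - p) ** 2 for a, p in zip(actual, predicted))
    ss_tot = sum((a - mean_a) ** 2 for a in actual)
    return 1 - ss_res / ss_tot

actual = [3.0, 5.0, 7.0]
predicted = [2.5, 5.0, 7.5]
```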

Classification Models in Azure ML designer:

Confusion Matrix:
  • Rows (top to bottom) = predicted values
  • Columns (left to right) = actual values
  • +,+ = 1 (True Positive)
  • -,- = 0 (True Negative)
  • +,- = Predicted diabetic but actually not diabetic (FALSE POSITIVE)
  • -,+ = Predicted not diabetic but actually diabetic (FALSE NEGATIVE)
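The four cells can be counted directly from actual vs predicted labels, as a sketch (1 = diabetic, 0 = not diabetic, matching the notes above; the example labels are made up).

```python
# Count confusion-matrix cells from paired actual and predicted labels.
def confusion_counts(actual, predicted):
    tp = sum(1 for a, p in zip(actual, predicted) if a == 1 and p == 1)
    tn = sum(1 for a, p in zip(actual, predicted) if a == 0 and p == 0)
    fp = sum(1 for a, p in zip(actual, predicted) if a == 0 and p == 1)
    fn = sum(1 for a, p in zip(actual, predicted) if a == 1 and p == 0)
    return tp, tn, fp, fn

actual    = [1, 0, 1, 0, 1]
predicted = [1, 0, 0, 1, 1]
tp, tn, fp, fn = confusion_counts(actual, predicted)  # → (2, 1, 1, 1)
```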

Inference Pipeline:

  • Used to deploy a model and start making predictions.
  1. Real-time:

    • On-demand, a few predictions at a time.
  2. Batch:

    • Many predictions processed together.
  • Models can be deployed to Azure Kubernetes Service (AKS).

Clustering Models

  • Clustering pipelines differ from the regression and classification pipelines.
Pipelines:
  • Use the K-Means Clustering module.
  • Change the number of centroids to change the number of clusters.
  • Instead of Train Model, use Train Clustering Model.
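A minimal k-means sketch on 1-D points (not the Azure module itself): the number of centroids, k, determines the number of clusters, and no label is used.

```python
# Naive k-means on 1-D points: assign each point to its nearest centroid,
# then move each centroid to the mean of its cluster, and repeat.
def kmeans_1d(points, k, iters=10):
    centroids = points[:k]  # naive initialization: first k points
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda i: abs(p - centroids[i]))
            clusters[nearest].append(p)
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return centroids, clusters

# Two obvious groups: values near 1 and values near 10.
centroids, clusters = kmeans_1d([1.0, 1.2, 0.8, 10.0, 10.5, 9.5], k=2)
```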

Computer Vision (Chapter 4)

  • Computer Vision: How AI sees the world, uses pre-built computer vision models.
  • Image descriptions returned from the Computer Vision service will contain a sentence or phrase along with a confidence score.
Azure Services:
  1. Computer Vision: Interact with the service via API.
Capabilities:
  1. Description

  2. Tagging

  3. Object Detection:

    • A bounding box and probability score are returned when detecting objects or faces in an image.
  4. Brand Detection

  5. Facial Detection

  6. Categorize (86 pre-defined categories)

  7. Landmarks and Celebrities

  • Cognitive Services: Bundles Computer Vision together with other AI services.
Custom Vision:
  • Image classification: Can label each image
  • Object Detection: Can label each object along with a position of each object.
Azure Face Service:
  • Returns more detailed information compared to computer vision.

    • Facial Analysis: Analyze face for various attributes.
    • Facial Recognition: Model recognizes someone based on their features.
Reading Text from an Image:
  • OCR (Optical Character Recognition):

    • Used to read handwritten text or printed documents.
    • Use computer vision for text from an image.
  • The OCR API is suited to small amounts of text; the Read API works asynchronously and is better for larger documents.

Form Recognizer:
  • Reads text from images of receipts, invoices, and forms.

  • It can interpret different fields using OCR

  • OCR vs Form Recognizer:

    1. With OCR, we would still need to sort out the details and match fields to their values.
    2. Form Recognizer automatically matches the fields to the values.
  • Azure Custom Vision can be used to create a model for either image classification or object detection using your own images.

  • Azure Computer Vision can be used to analyze images, suggest tags, read text from images, detect objects, detect brands, detect landmarks, perform basic facial detection (the approximate age, if there is a face in the image), and more. It does not allow for advanced facial detection.

Natural Language Processing (Chapter 5)

  • NLP: Area of AI that can understand written and spoken language.
Azure Services:
  • Text Analytics, Speech, Translator, and Language Understanding.
Text Analytics:
  • Best used to examine and evaluate text for language, sentiment, and key phrases.

    1. Language Detection: Identifies the language of the text; returns an unknown language with a score of NaN when the language is ambiguous.
    2. Sentiment Analysis: Determines whether the text is positive or negative; returns a score between 0 and 1, where close to 1 is positive and close to 0 is negative.
    3. Entity Recognition: Returns a list of entities found in the text along with their categories and confidence scores.
Speech Recognition:
  • STT (Speech to Text): Converts spoken language into data or text, using the Speech to Text API.
Speech Synthesis:
  • TTS (Text to Speech): Converts text into spoken audio; used in GPS applications, voice menus, and broadcast announcements, via the Text to Speech API.
Text Translation:
  • Translates text or documents from one language to another; supports more than 60 languages and can filter profane content.
Speech Translation:
  • Translates spoken language automatically, either speech to speech or speech to text.
Azure Services
  1. Translator Text service (text to text)
  2. Azure Speech (speech to text, or speech to speech)
  3. Azure Cognitive Services
Language Understanding:
  • AI system should be able to understand what we are trying to communicate.

    1. Utterance: Phrase we say to AI.

    2. Entities: The item the phrase refers to.

    3. Intent: Purpose or goal of the phrase

      • Example:

        • "Turn on the living room lights" = utterance
        • "Lights" (the living room lights) = entity
        • "Turn on" = intent
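A toy illustration of mapping an utterance to an intent and entity (the patterns and names here are made up; the real Language Understanding service learns them from labeled example utterances rather than hard-coded rules):

```python
# Hypothetical hand-written patterns standing in for a trained model.
INTENT_PATTERNS = {
    "TurnOn": ["turn on", "switch on"],
    "TurnOff": ["turn off", "switch off"],
}
ENTITIES = ["living room lights", "fan", "tv"]

def parse(utterance):
    # Match the utterance against known intents and entities.
    text = utterance.lower()
    intent = next((name for name, pats in INTENT_PATTERNS.items()
                   if any(p in text for p in pats)), "None")
    entity = next((e for e in ENTITIES if e in text), None)
    return intent, entity

parse("Turn on the living room lights")  # → ("TurnOn", "living room lights")
```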

Azure Services

  1. Language Understanding: Can be provisioned for authoring, or as a standalone prediction service.
  2. A prediction resource only supports predictions.

Conversational AI

  • AI systems that can chat with humans
Knowledge Base:
  • A list of question-and-answer pairs; a database holding all of your questions and answers.
Bot Service:
  • Provides interface to the knowledge base.
Cognitive Services for Language:
  • How we create the question-and-answer knowledge base; we can interact with it via the REST API, a software development kit (SDK), or Language Studio.

    1. Generate QnA pairs from an existing FAQ document.
    2. Import them from a predefined chit-chat data source.
    3. Enter the questions and answers manually.
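At its simplest, a knowledge base is a lookup over question-and-answer pairs, as in this sketch (the questions and answers are made up; a real service also matches paraphrased questions, which a plain dictionary lookup cannot):

```python
# Hypothetical knowledge base of question-and-answer pairs.
knowledge_base = {
    "what are your hours?": "We are open 9am-5pm, Monday to Friday.",
    "where are you located?": "123 Example Street.",
}

def answer(question, default="Sorry, I don't know that one."):
    # Normalize the question, then look it up; fall back to a default reply.
    return knowledge_base.get(question.strip().lower(), default)

answer("What are your hours?")  # → "We are open 9am-5pm, Monday to Friday."
```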
Azure Bot Service
  • How we can deliver our knowledge to our end users

    1. Once we deploy knowledge, we can create a bot
  • We can create an interface via web chat, email, Microsoft Teams, and more.

  • Power Virtual Agents are another way to create a no-code bot