AZ-204 Developing Solutions for Microsoft Azure exam preparation (Developer Associate certification)

I recently passed the certification exam for the AZ-204 Azure Developer Associate, so in this post I will summarize important concepts that you should know for the exam and explain how it went.

Please note that this is NOT meant to be a comprehensive course to pass the exam, but rather a list of facts that you should know or be aware of. If some concepts in this list are unknown to you, you’ll know which topics to study a bit further.

How I studied for the AZ-204 certification exam

  • The first step is of course the official courses on Microsoft Learn, which are free. You should also know that there are regularly challenges where you can get 50% or 100% off your certification exam price if you complete the learning path in 30 days! On Microsoft Learn, you get theory and labs, plus quick checkpoints with a few questions to test your knowledge.
  • Then I spent hours on Pluralsight. There is a path dedicated to the certification, but you have to be very careful because some of the courses were not created for that purpose. As a result, some of the content is useless for the certification exam (good to know if you’re short on time…). Make sure to check the list of required topics for the exam. Course quality is also inconsistent on that platform, because the courses are made by different trainers.
    On Pluralsight there are level checks for tons of subjects, including Microsoft Azure topics, which can be useful. But again, they go beyond what you have to know because they’re not certification-oriented. There is a new Pluralsight feature for certifications with mock exams: they are totally not worth it, as the questions (both their style and their content) are totally irrelevant.
  • Finally, the day before the exam, I wanted to test myself with a mock exam and found out that there are some on Udemy. I found a really well-done course including mock questions (sorry, that course has since been retired). I don’t think it’s sufficient by itself, but in addition to the Microsoft Learn material it’s really worth it.
    Here’s a tip if you sometimes buy courses on Udemy: you can get some cashback (partial refund) with iGraal.

One general piece of advice: pay close attention to the publication date of any training you follow outside of the ones proposed by Microsoft, as Azure services evolve regularly.

Profile of the Microsoft Certified Azure Developer Associate

According to Microsoft, you should have at least 1.5 years of hands-on experience with Azure and in development to complete this path.

I do have a bit more than 1.5 years of experience in development, but way less with Azure. I integrated some Azure services in my graduation project, so I’ve worked with it for a few months (I would say a good six months), mostly with Blob storage (which is part of the certification) and PostgreSQL server (which is not). I also did a lot of research on topics like API Management and App Services for this graduation project. I had done very little on Azure in a professional environment at the time I passed the certification.

It’s really challenging to pass the certification with little experience on Azure because you have to remember a lot of information on a variety of subjects. If you are used to working with it, you remember things without really having to study them. Therefore it’s really important to do the labs and take advantage of the 1-month trial to test the services.

Also, if you have to push too hard on the studying, it might mean you’re not ready for it and should first get some hands-on experience on the matter. Studying only makes sense if you’re actually able to use what you learned afterwards. Otherwise, although the certification might help you get interviews or a job, you would be dismissed as soon as people realize you can’t actually do the work.

Open book certification

Since September 2023, role-based certifications like this one are “open book” exams. That doesn’t mean you can browse the Internet or bring your own notes, but you will have access to Microsoft Learn content during the exam. The exam time wasn’t extended, which means you still have to know your topics, but you won’t need to remember very specific information like maximum sizes or tier names by heart.

It’s great news, since it’s much closer to a work-like situation without making those certifications too easy and worthless: the time is still very short, and you wouldn’t make it if you had to search for every single answer, or didn’t even know what to look for.

Read the official Microsoft Azure article on Open book certifications.

How the exam works

Currently, the exam consists of 53 questions to answer in 100 minutes. I had about 15-20 minutes remaining when I finished, knowing that I’m usually pretty fast, that I didn’t hesitate on many questions, and that I didn’t review any questions.

The exam has 3 parts:

  • Your usual certification format: questions with either a single-choice answer, multiple-selection answers (you’ll know how many you have to pick), drag-and-drop answers or dropdown lists. There were 39 questions like this if I remember correctly.
  • A “Does the solution work” section. Here you’ll see the same problem description several times, but with different ways to implement a solution (different Azure services, basically). You just have to select “yes” or “no” to say whether the solution fits the problem. The particularity here is that you can’t go back once you’ve answered.
  • A use cases section, where you get a detailed overview of a problem (“A company in that field wants to do this, their requirements are…, this kind of security must be ensured, they will integrate those services…”), with the information spread across several pages (not full text pages, but still… keep enough time for this last section!). Based on that information, you’ll have several questions to answer (same format as the first section, mostly multiple-choice) on security, networking, data management, services…

After each section of the exam, you can review your answers before starting the next one.

Another interesting thing to know: when you start, you are asked to choose the programming language in which development questions will be shown, either C# or Python (from what I had read, I assumed there would be no choice and C# would be the language used). I’ve never developed in C# and I know really little Python, so I chose… C#, because as a Java developer I thought it would be the easier of the two to read. Most courses show code examples in C#/.NET, which I was too lazy to read because I don’t intend to work with those languages, but the questions are really oriented toward using the Azure service SDKs, not toward knowing the programming language.

Also note that when a question requires several answers and where relevant, each answer counts for 1 point. That means you can grab points even when your overall answer isn’t fully correct. It’s written under the question when this is the case.

What kind of questions you should be able to answer

This is a non-comprehensive list of the kind of questions you could see at the exam:

  • What is the best service/solution to solve a problem, based on various requirements (price optimization, security, redundancy…). This includes, in some cases, being able to choose the best tier. It’s important to understand the concept of best choice: several solutions may solve the problem but one is optimal, and the requirements in the question will tell you which.
  • Complete code snippets. They can be C# (or Python, according to your choice) snippets where you have to fill in class names, method signatures…, YAML files for app/deployment configuration, PowerShell scripts, ARM templates or policy definitions (be able to fill in the section titles and parameter names, or to place values in the right spot), SQL queries (for Cosmos DB)… I had several code snippets and policy files and two PowerShell scripts; the other formats came up once each.
  • Put the steps of a process in order: you get a list of 6 or 7 actions, and you have to choose the 3 actions that are needed to fulfill the request and place them in order of execution.
  • Drag-and-drop or dropdown questions on random knowledge (for example, being able to identify the trigger and the input and output bindings of a Function, or to select the right configuration for a service).

On the whole, I didn’t find the questions tricky: if you know your subject, you quickly see what you have to answer. The most difficult part for me was knowing by heart how to fill in some code snippets or configuration files.

Exam revision: My summary of the key information to remember

Again, please use this as revision material more than study material: the most up-to-date and complete source will always be Microsoft Learn, and like any human I could have gotten things wrong. This is more a way to find out if you missed some topics during your study, so you can then search for more information on them.

I didn’t write down all the CLI and PowerShell commands, but you should know the commands (not the full list of parameters) for all resources, especially those for which automation/scripting is likely to be used.

Develop Azure Compute Solutions

Implement IaaS Solutions

Provision Virtual Machines, use ARM templates, configure container images, publish an image to Azure Container Registry (ACR), run containers with Azure Container Instances (ACI).

  • VMs can be created with the portal but programmatic creation allows for consistency in deployments, automation, creation of slots for test/dev environments… For this, you use ARM templates, CLI (az vm create) or PowerShell (New-AzVM).
  • Ensure remote access port is open (RDP 3389, SSH 22) and retrieve public IP address (az vm open-port, az vm list-ip-addresses; -OpenPorts parameter in New-AzVM, Get-AzPublicIpAddress).
  • ARM templates are defined in JSON and can create any resource; build/export them from the portal or write your own, and deploy them from the quickstart library (a deployment sketch follows this list).
  • ACR = Docker based service.
    az acr create
    az acr build --image --registry or docker push -> build an image and push it to the registry
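
To make this concrete, here’s a minimal sketch of deploying an ARM template programmatically with the Python management SDK (azure-identity + azure-mgmt-resource). The subscription ID, resource group, deployment name and template URL are hypothetical placeholders:

    # Minimal sketch: deploy an ARM template with the Python management SDK.
    from azure.identity import DefaultAzureCredential
    from azure.mgmt.resource import ResourceManagementClient

    client = ResourceManagementClient(DefaultAzureCredential(), "<subscription-id>")

    poller = client.deployments.begin_create_or_update(
        "my-rg",          # hypothetical resource group
        "vm-deployment",  # deployment name
        {
            "properties": {
                "mode": "Incremental",  # leave resources outside the template untouched
                "templateLink": {
                    "uri": "https://example.com/templates/azuredeploy.json"  # hypothetical
                },
                "parameters": {"vmName": {"value": "demo-vm"}},
            }
        },
    )
    print(poller.result().properties.provisioning_state)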

Create Azure App Service Web Apps

Create an Azure App Service Web App, enable diagnostic logging, deploy code to a web app, configure the settings, implement autoscaling rules.

Implement Azure Functions

Create and deploy Azure Functions including Durable Functions.

  • Functions need a Storage account (being serverless, they rely on it for state, triggers and logging). Functions live inside a Function App.
  • Service plans:
    Consumption = serverless, pay-as-you-go (you only pay when functions run)
    Azure App Service plan = avoids timeouts (functions can run constantly) for scenarios where Durable Functions can’t be used; not really serverless; lets you use underutilized VMs already in an App Service plan
    Premium = pre-warmed instances, for long-running functions
  • Function bindings
    Bindings can be input or output (or both!). Triggers are a special type of binding with the additional capability of initiating execution. A Function has exactly one trigger (no less, no more) but can have 0-N other bindings.
    For the exam you should know the main characteristics/options of the pre-existing triggers (see Microsoft doc).
  • Durable Functions are an extension of Functions that enable long-lasting, stateful operations while still being serverless and consumption-based.
    They let you orchestrate long-running workflows.
    The service takes care of monitoring, synchronization and runtime concerns, dehydrating and rehydrating processes for cost efficiency…
    It is recommended to follow common workflow patterns, which predefined templates can help with (a minimal orchestrator sketch follows this list):
    – function chaining (sequential calls),
    – fan out/fan in (parallel processes with final aggregation of the results),
    – async HTTP APIs,
    – monitor (loop looking for a change in state),
    – human interaction (if a human operation, like manual validation, is required).
    They can be edited with Visual Studio or through a REST API but not with Azure Portal.
  • Azure Functions Core Tools let you develop Functions locally: you can test the functions, then publish them to Azure. Functions have to be created and managed either with Core Tools or with the portal, no mixing. When publishing, the Function App is stopped and its content is deleted before the new deployment; all functions must be in the same project, and deployment is not incremental. There is no link between the local project and the Function App, so the same project can be published multiple times to multiple targets.
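
As a concrete illustration of the function chaining pattern mentioned above, here is a minimal Python orchestrator sketch (azure-functions-durable, v1 programming model); the activity names are hypothetical:

    # Minimal sketch of the "function chaining" Durable Functions pattern.
    import azure.durable_functions as df

    def orchestrator_function(context: df.DurableOrchestrationContext):
        # The orchestrator is replayed by the runtime, so it must stay
        # deterministic; each step runs as a separate activity function.
        order = yield context.call_activity("ValidateOrder", context.get_input())
        receipt = yield context.call_activity("ChargePayment", order)
        return receipt

    main = df.Orchestrator.create(orchestrator_function)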

Implement Azure Security

I really don’t feel comfortable delivering information about security myself, but the Udemy course I recommend in the intro of this article really helped me get the information you should know for the certification. Just sayin’…

I would recommend knowing how to register an app in Azure AD, and which workflows and strategies to use to grant access to human users, apps… based on various requirements. The only place I could find clear and structured information about it is the above-mentioned course.

Connect to and Consume Azure Services and Third-party services

Implement API Management

  • Auto-scaling is supported in the Standard and Premium tiers.
  • Authentication to Key Vault -> Managed identities (see Security).
  • Caching capabilities (see Caching/CDN).
  • Can import Logic Apps, Function Apps, API Apps (App Service) and custom APIs via OpenAPI (Swagger), WADL (XML) or WSDL (XML, SOAP), or start from a blank API.
  • A single APIM deployment (= instance) can be distributed among different regions with Premium tier -> fail-over, less latency.
  • Configuration with policies ❕.
    Inbound = when a request is received
    Backend = after the request is received but before the backend is contacted
    Outbound = on the response
    On-error = if any error is caught
    Scopes of policies: global, product (= a collection of APIs), API, operation.
  • Pricing tier:
    – developer = for dev environment
    – basic = entry level production, 2 scale units
    – standard = 4 scale units
    – premium = multi-region deployment, 10 scale units per region
    – consumption = serverless, high availability, auto-scaling
  • API gateway URL: myApi.azure-api.net (see the call sketch below).
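
As a quick illustration, here’s a minimal sketch of calling an API published through APIM from Python with the requests library. The gateway URL and key are hypothetical; the subscription key travels in the Ocp-Apim-Subscription-Key header (or in a subscription-key query parameter):

    # Minimal sketch: call an APIM-fronted API with a subscription key.
    import requests

    resp = requests.get(
        "https://my-apim-instance.azure-api.net/orders/42",  # hypothetical gateway URL
        headers={"Ocp-Apim-Subscription-Key": "<subscription-key>"},
    )
    resp.raise_for_status()
    print(resp.json())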

About events and messages

  • Message: contains the data, contract between sender and receiver, guaranteed delivery.
  • Event: lighter weight, broadcast (the sender doesn’t care about who decides to listen and how), subscription managed by an intermediary like Event Grid or Event Hub that routes events to interested listeners, 0-N receivers, ephemeral.
  • In the exam, you have to be able to choose between all four solutions (Event Grid, Event Hub, Service Bus, Queue storage), or to choose the one best suited for events or for messages.

Develop Event-Based Solutions

Azure Event Grid and Azure Event Hub.

  • Two types of events:
    Discrete = change of state, actionable (usually Event Grid)
    Series = condition, time-ordered and analyzable (usually Event Hub)
  • Event Hub is scalable and can ingest a large amount of data, which makes it a tool for Big Data, IoT… or any case where data has to be analyzed. Sender and receiver are decoupled. (A producer sketch follows this list.)
    A Namespace is a container for event hubs; it’s also a scope for options.
    Partitions are buckets of events, time-ordered. There should be as many partitions as there can be concurrent consumers. -> Buffering (default up to 24 hours). Every hub has 2-N partitions, each with a separate set of subscribers.
    Supports at-least-once delivery.
    Events are deleted when the retention time runs out, not upon read.
    az eventhubs namespace create
    az eventhubs eventhub create
  • Event Grid is an event routing service that distributes events from different sources to different handlers.
    Simple, straightforward connection from source to subscriber, filtering, pay-per-event, retries delivery for up to 24 hours for each subscription, one event at a time.
    Supports at-least-once delivery.
    (For handling streams of data, aggregation and analytics, prefer Event Hub.)
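
Here’s a minimal sketch of sending events to an event hub with the azure-eventhub Python SDK; the connection string and hub name are hypothetical:

    # Minimal sketch: publish a batch of events to an event hub.
    from azure.eventhub import EventData, EventHubProducerClient

    producer = EventHubProducerClient.from_connection_string(
        "<namespace-connection-string>", eventhub_name="telemetry"
    )
    with producer:
        batch = producer.create_batch()  # enforces the maximum batch size
        batch.add(EventData('{"deviceId": "sensor-1", "temperature": 21.5}'))
        producer.send_batch(batch)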

Develop Message-Based Solutions

Azure Service Bus and Azure Queue Storage.

  • Queue storage is part of Azure Storage (like Blob storage), it lives inside a Storage account.
    The default TTL is 7 days, after which unread messages are deleted.
    Data redundancy LRS/ZRS/GRS/GZRS + RA-GRS and RA-GZRS (RA = read access).
    URL: storageAccountName.queue.core.windows.net/queueName
    Authentication with shared key, shared access signature (SAS) or Azure AD.
    Visibility timeout = time during which a retrieved message is invisible to other consumers; if it isn’t deleted before the timeout expires, it becomes visible in the queue again.
    az storage queue create
    az storage message peek -> read a message without changing its visibility
    az storage message get -> retrieve a message and make it invisible for the visibility timeout (delete it explicitly once processed)
    You can retrieve a maximum of 32 messages per request. (A peek/get sketch follows this list.)
    Audit trail capability, track progress of a message inside the queue.
    Message lifetime <= 7 days.
    Part of Azure Storage -> use of the service features.
  • Service Bus = message broker.
    Can use queues and topics, which both live inside a Namespace.
    producer -> queue -> consumer
    publisher -> topics -> [filters] -> multiple subscribers
    Supports AMQP protocol.
    Supports ordering (FIFO), batching, duplicate detection, and a DLQ (dead-letter queue) to hold messages that can’t be delivered or processed.
    Basic tier does not support topics.
    Standard tier: pay-as-you-go, shared resources, auto-scaling, variable throughput and latency.
    Premium tier: redundancy, fixed pricing per number of messaging units, dedicated resources, requires configuration of scaling rules, geo-disaster recovery and availability zones.
    URL: namespace.servicebus.windows.net/queueOrTopicName
    az servicebus queue/topic create
    Enterprise level (higher security requirements, dedicated infrastructure for messaging, multiple communication protocols and data contracts, on-prem and cloud).
    Queues: supports at-most-once delivery, FIFO guarantee, supports transactions, polling capability, RBAC, batches.
    Topics: multiple receivers, filtering.
    Message lifetime can be > 7 days.
    Queue size <= 80 GB.
    Duplicate detection.
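
To illustrate the peek vs. get semantics described above, here’s a minimal Queue storage sketch with the azure-storage-queue Python SDK; the connection string and queue name are hypothetical:

    # Minimal sketch: peek vs. receive on an Azure Storage queue.
    from azure.storage.queue import QueueClient

    queue = QueueClient.from_connection_string("<connection-string>", "orders")
    queue.send_message("process-order-42")

    # peek: read without affecting visibility
    print([m.content for m in queue.peek_messages(max_messages=5)])

    # receive: the message becomes invisible for the visibility timeout;
    # delete it once processed, otherwise it reappears in the queue
    for msg in queue.receive_messages(visibility_timeout=30):
        print("processing", msg.content)
        queue.delete_message(msg)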

Develop for Azure Storage

Develop Solutions that use Cosmos DB Storage

Select the appropriate API/SDK, implement partitioning schemes and partition keys, set the appropriate consistency level, manage change feed notifications.

  • Consistency levels are important to understand; I got at least 2 questions where that notion influenced the answer I had to give. Cosmos DB is a distributed database, which means data is replicated across various areas. Consistency levels define the replication policy, based on your priorities.
    There are 5 levels of consistency. When you lower the consistency, you get lower latency, more availability, more throughput and lower fees. The next 5 bullets are the consistency levels in order, starting from the highest consistency.
  • Strong consistency: Synchronous replication in real time, which means the user always sees the latest version of the data.
  • Bounded Staleness: Asynchronous replication. Staleness window defined by a number of writes or a time period: you have control over the “trigger” for replication. The order is guaranteed.
  • Session: Default behavior. Keeps data consistent inside the user session.
  • Consistent prefix: Data is not always current but consistency and order are guaranteed. It means if “A, B, C” was written, the user may see “A, B, C”, “A, B” or “A”.
  • Eventual: Eventually, data will be complete, but in the meantime it’s not necessarily replicated in order and there is no guarantee about how long it will take for current data to be replicated. This is suitable for counters (like counters on social networks), for example.
  • Cosmos DB Account (choice of the API at account level, default = SQL API) > databases > containers (more or less equivalent to a table in SQL in terms of hierarchy of data) > items
  • Supported APIs to query data:
    – SQL = SQL-like query language, data stored as JSON; the default configuration and the default choice if there’s no specific reason to choose another
    – Cassandra = migration of existing Cassandra DBs
    – MongoDB = migration of Mongo DBs
    – Gremlin (graph database) = for graph relationships between data (use cases: social networks, product recommendations)
    – Azure Table = migration of Azure Table storage apps (OData, LINQ)
    I believe I only got questions about the SQL API in my exam, so I can’t give more details about the specifics of the other APIs. (A query sketch follows this list.)
  • az cosmosdb create
    az cosmosdb sql database create
    az cosmosdb sql container create
  • Choice of the partition key (equivalent to the id of a SQL table in terms of use, but it does not have to be unique) -> the criterion used to distribute information across the partitions.
  • Cost of CosmosDB = Request Units (see documentation).
  • Change Feed: external to the DB engine: NOTIFICATIONS. It enables being notified of any insert or update on data; deletes are not directly supported, but you can leverage a soft-delete flag. A change appears exactly once in the change feed; reading the feed consumes throughput; updates within a partition appear in order, but between partitions there is no guarantee; not supported for the Azure Table API.
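
Here’s a minimal sketch of querying the SQL (Core) API with the azure-cosmos Python SDK. The account URL, key, database/container names and query are hypothetical; note the consistency_level option, which can relax the account default for this client:

    # Minimal sketch: parameterized SQL API query against one partition.
    from azure.cosmos import CosmosClient

    client = CosmosClient(
        "https://myaccount.documents.azure.com:443/",  # hypothetical account
        credential="<account-key>",
        consistency_level="Session",
    )
    container = client.get_database_client("storedb").get_container_client("orders")

    items = container.query_items(
        query="SELECT * FROM c WHERE c.customerId = @id",
        parameters=[{"name": "@id", "value": "42"}],
        partition_key="42",  # scoping to one partition keeps the RU cost down
    )
    for item in items:
        print(item["id"])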

Develop Solutions that use Blob Storage

Move items between storage accounts and containers, set and retrieve properties and metadata, SDK, storage policies, data archiving and retention.

  • There are three access tiers, described in the next three bullets. You can automate moving items to another tier with lifecycle policies.
  • Hot tier: for frequent access, storage cost is higher but access cost is lower.
  • Cool tier: for data with less frequent access; storage cost is lower but access cost is higher. Data should be in the cool tier if it hasn’t been accessed for at least 30 days.
  • Archive tier: offline storage; expect 1 to 15 hours to regain access to the data. Data should be in the archive tier if it hasn’t been accessed for 180 days.
  • Redundancy (not only applicable to Blob storage; it’s a very important concept in cloud development). You should be able, for example, to say how many copies of the data exist based on the redundancy policy, or which tier ($, not storage tier) you need in order to use one or another. I find it much easier to understand and remember with the schemas.
    LRS (Locally redundant storage) = default, 3 copies in a single data center.
    ZRS (Zone redundant storage) = 3 copies across 3 different Availability Zones (physically distant locations) in a region (so in 3 different data centers).
    GRS (Geo-redundant storage) = 3 copies in a data center of the region (LRS) + replication to a second location in a different region (LRS too). Only for Standard General Purpose v2.
    GZRS = Geo + Zone. Only for Standard General Purpose v2.
  • Can store any type of blob, even unknown types (not limited to a list of file types).
  • Three types of blob
    – block (most used)
    – page (random access files, used primarily as the backing storage for the VHDs)
    – append (usually for logging)
  • Blobs live inside a Blob container, in a Storage account. Containers are flat (they don’t contain other containers).
  • Use the SDK (this is typically the kind of question you can see at the exam when they want you to identify the actions needed to achieve something; a sketch follows this list)
    1/ retrieve configuration at startup (connection to the storage account)
    2/ initialize the client (create the objects the app will manipulate)
    3/ make calls to the API through the client library
  • Access:
    – Shared keys = easiest to use, embedded in the http Authorization header of every request, give access to the whole storage account => only use with trusted in-house apps!
    – Shared Access Signature (SAS) = scoped and limited in time
    – User Delegation = uses Azure AD, most secure (only available for Blob and Queue storage among the Azure Storage services).
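
Here’s a minimal sketch of the three SDK steps above with the azure-storage-blob Python SDK; the connection string, container and blob names are hypothetical:

    # Minimal sketch: configure, initialize, then call the Blob storage API.
    from azure.storage.blob import BlobServiceClient

    # 1/ retrieve configuration at startup
    service = BlobServiceClient.from_connection_string("<connection-string>")

    # 2/ initialize the client objects the app will manipulate
    blob = service.get_blob_client(container="invoices", blob="2024-report.pdf")

    # 3/ make calls to the API through the client library
    blob.upload_blob(b"sample bytes", overwrite=True)
    blob.set_standard_blob_tier("Cool")  # move the blob to the cool access tier
    print(blob.get_blob_properties().blob_tier)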

Monitor, Troubleshoot and Optimize Azure Solutions

Integrate Caching and Content Delivery (CDN)

Configure caching for Azure Cache for Redis.

  • You can add two types of cache to your applications: internal and external. Internal caching is built in, the size is limited according to the tier, and it’s not available in the Consumption tier. The external cache can be Azure Redis, an external Redis service or another external caching service. The advantage is that it’s persistent and independent from the Azure service or app: you don’t lose data if you update the app. Both work based on policies.
  • Redis use cases: user session storage for distributed apps, database caching, content caching, distributed transactions, message broker.
    The available size depends on the tier, as well as the performance.
    Basic: for testing purpose, no SLA
    Standard: SLA, base level fail-over and replication
    Premium: Redis persistence, Redis clusters, passive geo-replication
    Enterprise: Redis search and other tools, active geo-replication
    Enterprise Flash: Enterprise features on flash-based (non-volatile) storage for larger caches.
    All tiers support Azure private link.
  • Encryption: enabled by default, with support for TLS 1.0, 1.1 (soon to be deprecated) and 1.2. It can be disabled.
  • Retention policies/how items are removed. You should be able to choose the best eviction policy for a use case (a connection sketch follows this list).
    Scheduled deletion, based on a TTL (time to live) value (= volatile items).
    Manual deletion (by key).
    Eviction = what happens when there is memory pressure. There are different policies:
    – volatile-lru (least recently used) = default, evicts the least recently used items among those with a TTL,
    – allkeys-lru = same but not limited to volatile items,
    – volatile-random = any volatile item,
    – allkeys-random = any item,
    – volatile-ttl = volatile items with the shortest remaining TTL,
    – no eviction = nothing is removed but no new items can be cached.
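
Here’s a minimal sketch of using Azure Cache for Redis from Python with the redis-py client; the host name and access key are hypothetical, and Azure exposes TLS on port 6380:

    # Minimal sketch: connect over TLS and store a volatile (TTL) item.
    import redis

    r = redis.Redis(
        host="mycache.redis.cache.windows.net",  # hypothetical cache name
        port=6380,
        ssl=True,
        password="<access-key>",
    )
    r.setex("session:42", 1800, "serialized-cart")  # volatile item, 30-minute TTL
    print(r.get("session:42"))
    r.delete("session:42")                          # manual deletion by key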

Instrument Solutions to support Monitoring and Logging

Use Application Insights with apps and services, analyze and troubleshoot solutions with Azure Monitor, implement Application Insights web tests and alerts.

  • Monitoring = Metrics and logs streams aggregated to get insights, visualize, analyze, respond… Metrics are numerical values at a certain time, Logs are events.
  • Application Insights is a performance management tool (CPU, memory, exceptions in source code, request rates, response time…).
  • Availability tests (under Application Insights) are of three types: URL ping test, custom (SDK -> TrackAvailability() method), or multi-step (recording of a sequence of web requests).
  • Windows apps can send logs to File System or Blob storage, Linux apps only to File System.
  • Live log tracing: az webapp log tail
  • Windows apps in App Service automatically integrate with Application Insights; for Linux apps some code needs to be added (a sketch follows).
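
For Python apps, a minimal instrumentation sketch with the azure-monitor-opentelemetry distro could look like this; the connection string is a hypothetical placeholder taken from the Application Insights resource:

    # Minimal sketch: send telemetry to Application Insights via OpenTelemetry.
    from azure.monitor.opentelemetry import configure_azure_monitor
    from opentelemetry import trace

    configure_azure_monitor(connection_string="InstrumentationKey=<key>")

    tracer = trace.get_tracer(__name__)
    with tracer.start_as_current_span("checkout"):
        pass  # work done here shows up in Application Insights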
