Azure Functions

Azure Database for MySQL bindings for Azure Functions (Public Preview)
The Azure Database for MySQL bindings for Azure Functions are now available in Public Preview! These newly released input and output bindings enable seamless integration with Azure Functions, allowing developers and organizations to build at-scale, event-driven applications and serverless APIs that integrate with MySQL, using programming languages of their choice, including C#, Java, JavaScript, Python, and PowerShell. This integration significantly speeds up application development time by reducing the need for complex code to read and write from the database.
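As a flavor of what this enables, here is a minimal sketch of an HTTP-triggered function that reads rows through a MySQL input binding using the Node.js v4 programming model. Treat the binding type name ('mysql'), its property names, and the connection setting name as assumptions to verify against the preview documentation; they are modeled on the existing Azure SQL bindings:

```javascript
const { app, input } = require('@azure/functions');

// Assumed binding shape; property names mirror the Azure SQL bindings and
// may differ in the MySQL preview -- check the official docs.
const productsInput = input.generic({
    type: 'mysql',                                   // assumed binding type name
    commandText: 'SELECT * FROM Products',           // query to run against MySQL
    connectionStringSetting: 'MySqlConnectionString' // app setting holding the connection string
});

app.http('getProducts', {
    methods: ['GET'],
    authLevel: 'anonymous',
    extraInputs: [productsInput],
    handler: async (request, context) => {
        // The binding delivers the query results; no MySQL client code is needed here.
        const products = context.extraInputs.get(productsInput);
        return { jsonBody: products };
    }
});
```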
Building a Cryptographically Secure Product Licensing System on Azure Functions and Cosmos DB

Introduction

Building a robust software licensing system requires careful consideration of cryptographic security, attack vectors, and implementation details. While many licensing systems rely on simple API calls or basic key validation, creating a more secure system demands a deeper understanding of cryptographic principles and secure communication patterns.

At its core, a secure licensing system must solve several fundamental challenges. How do we ensure that validation responses haven't been tampered with? How can we prevent replay attacks, where valid responses are captured and reused? This article will present a robust solution to these challenges using cryptographic signatures, nonce validation, and secure key management.

However, it's important to acknowledge an uncomfortable truth: no licensing system is truly unbreakable. Since license validation code must ultimately run on untrusted machines, a determined attacker could modify the software to bypass these checks entirely. Our goal, therefore, is to implement security measures that make bypass attempts impractical and time-consuming, while providing a frictionless experience for legitimate users. We'll focus on preventing the most common attack vectors, response tampering and replay attacks, while keeping our implementation clean and maintainable.

The security approach

Understanding the attack surface is crucial for building effective defenses. When a client application validates a license, it typically sends a request to a validation server and receives a response indicating whether the license is valid. Without proper security measures, attackers can exploit this process in several ways. They might intercept and modify the server's response, turning a "license invalid" message into "license valid." They could record a valid response and replay it later, bypassing the need for a real license key. Or they might reverse engineer the client's validation logic to understand and circumvent it.

Our solution addresses these vulnerabilities through multiple layers of security. At its foundation, we use RSA public-key cryptography to sign all server responses. The validation server holds the private key and signs each response with it, while the client applications contain only the public key. This means that while clients can verify that a response came from our server, they cannot generate valid responses themselves. Even if an attacker intercepts and modifies a response, the signature verification will fail.

To prevent replay attacks, we implement a nonce system. Each license check generates a unique random value (the nonce) that must be included in both the request and the signed response. The client verifies that the response contains the exact nonce it generated, ensuring that old responses cannot be reused. This effectively turns each license check into a unique cryptographic challenge that must be solved by the server.

Architecture

Our licensing system is built around a secure API that handles license validation requests, with the architecture designed to support our security requirements. The system consists of three main components: the validation API, the license storage, and the client function.

Validation API: Implemented as an Azure Function, providing a serverless endpoint that handles license verification requests. We chose Azure Functions for several reasons: they offer excellent security features like managed identities for accessing other Azure services, built-in HTTPS enforcement, and automatic scaling based on demand.
The validation endpoint is stateless, with each request containing all necessary information for validation, making it highly reliable and easy to scale.

License storage: We use Azure Cosmos DB, which provides several advantages for our use case. First, it offers strong consistency guarantees, ensuring that license status changes are immediately reflected across all regions. Second, it includes built-in encryption at rest and in transit, adding an additional security layer to our sensitive license data. Third, its flexible schema allows us to easily store different types of licenses and associated metadata.

Client function: I'll be writing my client function in JavaScript, though it is easily replicable in the language that is most relevant for your product. It handles nonce generation, request signing, and response verification, encapsulating these security details behind a clean, simple interface that application developers can easily integrate.

Let's look at how this fits together with a flowchart illustrating the journey: [flowchart: the client generates a nonce and sends it with the license key to the validation API, which checks the key against Cosmos DB, signs the verdict with the private key, and returns it to the client for verification]

Prerequisites

To follow along with this blog, you'll need the following:

Azure subscription: You will need an active subscription to host the services we will use. Make sure you have appropriate permissions to create and manage resources. If you don't have one, you can sign up here: https://azure.microsoft.com/en-us/pricing/purchase-options/azure-account

Visual Studio Code: For the development environment, install Visual Studio Code. Other IDEs are available, though we will be benefiting from the Azure Functions extension within this IDE. Download VS Code here: https://code.visualstudio.com/

VS Code Azure Functions extension (optional): There are many different ways to deploy to Azure Functions, but having one-click deploy functionality inside our IDE is extremely useful. To install it in VS Code, head to Extensions > search for Azure Functions > click Install.

Building the solution

Generating a public/private key pair

As outlined, we'll be using RSA cryptography as a security measure. This means that we'll need a public and private key pair. There are many ways to generate these, but the one I suggest does it all locally through the terminal, so there is no risk of exposing the private key. Run the following command in a local terminal:

ssh-keygen -t rsa -b 2048 -m PEM

Then follow the wizard through. (The -m PEM flag ensures the private key is written in the PEM format that Node's crypto module expects; newer versions of ssh-keygen otherwise default to the OpenSSH-specific format.) It will save the files at C:\Users\your-user\.ssh. You are expecting to find an id_rsa file and an id_rsa.pub file. Both of these can be opened in Notepad or another text editor. The .pub file is the public key, and the other is the private key. It is extremely important to keep the private key absolutely confidential, while there is no risk associated with exposing the public key. If the private key were exposed, attackers could sign responses as if they were the server and circumvent license checks. Keep these keys handy, as we will need them for signing and verifying signatures in this system.
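One formatting note, on the assumption that you'll verify signatures with Node's crypto module as we do later in this article: id_rsa.pub is written in OpenSSH's own public key format, which that module does not read. You can export a PEM copy for the client with:

ssh-keygen -f id_rsa.pub -e -m PKCS8 > public.pem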
Deploying the resources

Before we dive into the implementation, we need to set up our Azure infrastructure. Our system requires two main components: an Azure Function to host our validation endpoint and a Cosmos DB instance to store our license data.

Let's start with Cosmos DB. From the Azure Portal, create a new Cosmos DB account. Select the Azure Cosmos DB for NoSQL API - while Cosmos DB supports multiple APIs, this option provides the simplicity and flexibility we need for our license storage. During creation, you can select the serverless pricing tier if you're expecting low traffic, or provisioned throughput for more predictable workloads. Make note of your endpoint URL and primary key - we'll need these later for our function configuration.

Once your Cosmos DB account is created, create a new database named licensing and a container named licenses. For the partition key, use /id, since we'll be using the license key itself as both the document ID and partition key. This setup ensures efficient lookups when validating licenses.

Next, let's deploy our Azure Function. Create a new Function App from the portal, selecting Node.js as your runtime stack and the newest version available. Choose the Consumption (Serverless) plan type - this way, you only pay for actual usage and get automatic scaling. Make sure you're using the same region as your Cosmos DB to minimize latency.

After the Function App is created, you'll need to configure your application settings. Navigate to the Configuration section and add the following settings:

COSMOS_ENDPOINT: Your Cosmos DB endpoint URL
COSMOS_KEY: Your Cosmos DB primary key
PRIVATE_KEY: Your RSA private key for signing responses

You may also want to consider setting up Application Insights for monitoring. It's particularly useful for tracking license validation patterns and detecting potential abuse attempts.
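If you'd rather script those settings than add them in the portal, the Azure CLI supports the same operation; the app and resource group names below are placeholders for your own:

az functionapp config appsettings set --name <your-function-app> --resource-group <your-resource-group> --settings COSMOS_ENDPOINT="<endpoint-url>" COSMOS_KEY="<primary-key>" PRIVATE_KEY="<private-key-pem>"

(If your shell mangles the multi-line PEM value for PRIVATE_KEY, setting that one through the portal is the path of least resistance.)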
With these resources deployed, we're ready to implement our validation system. The next sections will cover the actual code implementation for both the server and client components.

Creating the validation API

As discussed, the validation API will be deployed as an Azure Function. While this use case only requires one endpoint, the benefit is that it is scalable, with the ability to add more endpoints as the scope widens. To create a Function project using the Azure Functions extension: go to the Azure tab > click the Azure Functions icon > click Create New Project > choose a folder > choose JavaScript > choose Model V4 > choose HTTP trigger > provide a name (e.g. 'validation') > click Open in new window.

This will create and open a brand new Function project with an HTTP trigger. This HTTP trigger will represent our validation endpoint and will be what the client calls to get a signed verdict on the license key provided by the user. Now let's write the code for this validation endpoint. The base trigger provided by the setup process should look something like this:

```javascript
const { app } = require('@azure/functions');

app.http('validate', {
    methods: ['GET', 'POST'],
    authLevel: 'anonymous',
    handler: async (request, context) => {
        context.log(`Http function processed request for url "${request.url}"`);
        const name = request.query.get('name') || await request.text() || 'world';
        return { body: `Hello, ${name}!` };
    }
});
```

Let's make a few modifications to this base:

Ensure that the only allowed HTTP method is POST, as there is no need to support both, and we will make use of the request body allowed in POST requests.
Clear everything inside the handler to make way for our upcoming code.
Optional: The first parameter of the http function, where mine is validate, represents the route that will be used. With the current setup, mine would be example.com/api/validate. If you want to change this path, now is the time, by adjusting that parameter.

Now, let's work forward from this adjusted base:

```javascript
const { app } = require('@azure/functions');

app.http('validate', {
    methods: ['POST'],
    authLevel: 'anonymous',
    handler: async (request, context) => {
    }
});
```

Our job now is to handle an incoming validation request. As per the flowchart above, we must check the license key against the existing keys stored in our Cosmos DB database, sign the response with the private key, and then return it to the client. First, let's define the format that we expect in the incoming body and give ourselves access to that data. This validation endpoint should expect two data points: licenseKey and nonce. The following code will pull those two variables from the body and return an error if they have not been provided. (Note: in the v4 Node.js programming model, JSON payloads are returned via the jsonBody property; body is reserved for strings and streams, so we'll use jsonBody throughout.)

```javascript
try {
    const body = await request.json();
    const { licenseKey, nonce } = body;

    if (!licenseKey) {
        return { status: 400, jsonBody: { error: 'Missing licenseKey' } };
    }
    if (!nonce) {
        return { status: 400, jsonBody: { error: 'Missing nonce' } };
    }

    // ...
} catch (error) {
    return { status: 400, jsonBody: { error: 'Invalid JSON in request body' } };
}
```

Now that we know for sure a license key and nonce were provided, this is just a case of adding layers of checks. The first check to add is to ensure that the license key and nonce formats are valid. It makes sense to have a standardized format, such as a UUID, so we can reduce unnecessary database calls. I'll use the UUID format. It's important to remember the desired format, so that we can (a) give frontend validation to the end user if they provide a wrongly formatted license key, and (b) ensure we generate a correctly formatted nonce. As for validating this, add the following regex as a const at the top of the file:

```javascript
const UUID_REGEX = /^[0-9a-f]{8}-[0-9a-f]{4}-4[0-9a-f]{3}-[89ab][0-9a-f]{3}-[0-9a-f]{12}$/i;
```

Then add these two checks after our previous ones:

```javascript
if (!UUID_REGEX.test(licenseKey)) {
    return { status: 400, jsonBody: { error: 'Invalid license key format' } };
}
if (!UUID_REGEX.test(nonce)) {
    return { status: 400, jsonBody: { error: 'Invalid nonce format' } };
}
```

Now, we are validating the format of both the license key and nonce using a UUID regular expression pattern. The code uses the test method to check whether each string matches the expected UUID format (like "123e4567-e89b-4d3c-8456-426614174000"). If either the license key or nonce doesn't match this pattern, the function returns a 400 Bad Request response with a specific error message indicating which field had the invalid format. This validation ensures that we only process requests where both the license key and nonce are properly formatted UUIDs.

Next is the most important check: validating the license. As explained, I'm choosing to use Azure Cosmos DB for this, though this part may vary depending on your chosen data store. Before we add any code, let's first conceptualize a basic schema for our data. Something like this:

```json
{
    "type": "object",
    "required": ["id", "redeemed"],
    "properties": {
        "id": {
            "type": "string",
            "pattern": "^[0-9a-f]{8}-[0-9a-f]{4}-4[0-9a-f]{3}-[89ab][0-9a-f]{3}-[0-9a-f]{12}$"
        },
        "redeemed": {
            "type": "boolean",
            "default": false
        }
    }
}
```

Cosmos DB is a great solution for this because we can easily add to the schema as the scope changes. For example, some potential ways to extend this functionality are:

An expiry date
A set of all activations, enforcing a cap
Data to identify the license holder once redeemed
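Issuing keys is out of scope for this article, but for illustration, a hypothetical one-off helper (all names assumed) that mints a license document matching this schema might look like the following; you could equally create documents by hand in the Data Explorer:

```javascript
const { CosmosClient } = require('@azure/cosmos');
const crypto = require('crypto');

// Hypothetical provisioning helper -- not part of the validation endpoint.
async function createLicense() {
    const client = new CosmosClient({
        endpoint: process.env.COSMOS_ENDPOINT,
        key: process.env.COSMOS_KEY
    });
    const container = client.database('licensing').container('licenses');

    // The license key doubles as the document id and partition key (/id).
    const license = { id: crypto.randomUUID(), redeemed: false };
    await container.items.create(license);
    return license.id; // hand this to the customer
}
```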
Now that we know what data to search, let's add the code. First, we need to install the required dependency so we can use Cosmos DB:

npm install @azure/cosmos

Next, create a local.settings.json file if it doesn't already exist. Then, add two environment variables to the Values section, COSMOS_ENDPOINT and COSMOS_KEY:

```json
{
    "IsEncrypted": false,
    "Values": {
        "AzureWebJobsStorage": "",
        "FUNCTIONS_WORKER_RUNTIME": "node",
        "COSMOS_ENDPOINT": "X",
        "COSMOS_KEY": "X"
    }
}
```

Be sure to populate those variables with the actual data from your deployed Cosmos DB resource. We'll be using these in a moment to programmatically connect to it. Add this import to the top of the file:

```javascript
const { CosmosClient } = require('@azure/cosmos');
```

Now, let's add the code to connect to our Cosmos DB resource and read the data to validate the license:

```javascript
const cosmosClient = new CosmosClient({
    endpoint: process.env.COSMOS_ENDPOINT,
    key: process.env.COSMOS_KEY
});
const database = cosmosClient.database('licensing');
const container = database.container('licenses');

try {
    const { resource: license } = await container.item(licenseKey, licenseKey).read();

    if (!license) {
        return { status: 404, jsonBody: { error: 'License not found' } };
    }

    // ...
} catch (dbError) {
    if (dbError.code === 404) {
        return { status: 404, jsonBody: { error: 'License not found' } };
    }
    return { status: 500, jsonBody: { error: 'Error validating license' } };
}
```

The first part of the code creates our connection to Cosmos DB. We use the CosmosClient class from the SDK, providing it with our database endpoint and key from environment variables. Then we specify that we want to use the licensing database and the licenses container within it, where our license documents are stored.

The core validation logic is surprisingly simple. We use the container's item method to look up the license directly using the provided license key. We use this key as both the item ID and partition key, which makes our lookups very efficient - it's like looking up a word in a dictionary rather than reading through every page. If no license is found, we return a 404 status code with a clear error message. We've also implemented proper error handling that distinguishes between a license not existing (404) and other potential database errors (500). This gives clients clear feedback about what went wrong while keeping our error messages secure - we don't expose internal system details that could be useful to attackers. Also, at this point we don't need to do any cryptographic signing, because there is no benefit to an attacker in replaying a rejected validation.

Now, after confirming a license exists in our database, but before returning it to the client, we need to sign our response to prevent tampering. This is crucial for security - it ensures that responses can't be modified or forged, as they need a valid signature that can only be created with our private key and verified with our public key. The signing process uses Node's built-in crypto module for RSA signing. First, we load our private key from environment variables. Then, for any valid license response, we create a SHA-256 signature of the JSON data, which we provide in the HTTP response.
Add this import to the top of the file:

```javascript
const crypto = require('crypto');
```

Then, add your previously generated private key to our environment variables:

```json
{
    "IsEncrypted": false,
    "Values": {
        "AzureWebJobsStorage": "",
        "FUNCTIONS_WORKER_RUNTIME": "node",
        "COSMOS_ENDPOINT": "X",
        "COSMOS_KEY": "X",
        "PRIVATE_KEY": "X"
    }
}
```

Then, after our Cosmos DB lookup, add the following code to sign a valid response:

```javascript
const respData = {
    valid: !license.redeemed,
    nonce,
    licenseKey
};

const sign = crypto.createSign('SHA256');
const dataString = JSON.stringify(respData);
sign.update(dataString);

return {
    jsonBody: {
        ...respData,
        signature: sign.sign(process.env.PRIVATE_KEY, 'base64')
    }
};
```

Now, every valid response includes both the license data and a cryptographic signature of that data. The client can then verify this signature using our public key to ensure the response hasn't been tampered with. Finally, after we have created respData and set valid to the inverse of license.redeemed, let's ensure we set redeemed to true in the database, so that the key cannot be used for future requests. Between signing the JSON and returning the data, add the following code:

```javascript
await container.item(licenseKey, licenseKey).patch([
    { op: 'replace', path: '/redeemed', value: true }
]);
```

This ensures a license can only be validated once, marking it as redeemed upon first use. And that's it for the validation endpoint! We've built a secure HTTP endpoint that validates license keys through a series of steps. It first checks that the incoming request contains both a license key and nonce in the correct UUID format. Then it looks up the license in our Cosmos DB to verify its existence and redemption status. Finally, for valid licenses, it returns a cryptographically signed response that includes the license status and the original nonce. The signature ensures our responses can't be tampered with, while the nonce prevents replay attacks. Our error handling provides clear feedback without exposing sensitive system details, making the endpoint both secure and developer-friendly.
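One hardening worth mentioning before the full listing: between the read and the patch, two near-simultaneous requests could both observe redeemed: false and both receive a valid verdict. Cosmos DB's partial document update supports a filter predicate that would make the redeem step atomic; a hedged sketch (not included in the final code below, and worth verifying against your SDK version) might look like:

```javascript
// Conditional patch: only flips the flag if it is still false. If another
// request redeemed the license first, the service rejects this patch with a
// 412 (precondition failed) error, which we could surface as "already redeemed".
await container.item(licenseKey, licenseKey).patch({
    operations: [{ op: 'replace', path: '/redeemed', value: true }],
    condition: 'from c where c.redeemed = false'
});
```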
The final code for this endpoint should look like this:

```javascript
const { app } = require('@azure/functions');
const { CosmosClient } = require('@azure/cosmos');
const crypto = require('crypto');

const UUID_REGEX = /^[0-9a-f]{8}-[0-9a-f]{4}-4[0-9a-f]{3}-[89ab][0-9a-f]{3}-[0-9a-f]{12}$/i;

app.http('validate', {
    methods: ['POST'],
    authLevel: 'anonymous',
    handler: async (request, context) => {
        try {
            const body = await request.json();
            const { licenseKey, nonce } = body;

            if (!licenseKey) {
                return { status: 400, jsonBody: { error: 'Missing licenseKey' } };
            }
            if (!nonce) {
                return { status: 400, jsonBody: { error: 'Missing nonce' } };
            }
            if (!UUID_REGEX.test(licenseKey)) {
                return { status: 400, jsonBody: { error: 'Invalid license key format' } };
            }
            if (!UUID_REGEX.test(nonce)) {
                return { status: 400, jsonBody: { error: 'Invalid nonce format' } };
            }

            const cosmosClient = new CosmosClient({
                endpoint: process.env.COSMOS_ENDPOINT,
                key: process.env.COSMOS_KEY
            });
            const database = cosmosClient.database('licensing');
            const container = database.container('licenses');

            try {
                const { resource: license } = await container.item(licenseKey, licenseKey).read();

                if (!license) {
                    return { status: 404, jsonBody: { error: 'License not found' } };
                }

                const respData = {
                    valid: !license.redeemed,
                    nonce,
                    licenseKey
                };

                const sign = crypto.createSign('SHA256');
                const dataString = JSON.stringify(respData);
                sign.update(dataString);

                await container.item(licenseKey, licenseKey).patch([
                    { op: 'replace', path: '/redeemed', value: true }
                ]);

                return {
                    jsonBody: {
                        ...respData,
                        signature: sign.sign(process.env.PRIVATE_KEY, 'base64')
                    }
                };
            } catch (dbError) {
                if (dbError.code === 404) {
                    return { status: 404, jsonBody: { error: 'License not found' } };
                }
                return { status: 500, jsonBody: { error: 'Error validating license' } };
            }
        } catch (error) {
            return { status: 400, jsonBody: { error: 'Invalid JSON in request body' } };
        }
    }
});
```
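Before moving on to the client, you can smoke-test the endpoint locally. Assuming the Azure Functions Core Tools are installed, run func start (which, by default, serves the project on port 7071 under the /api route prefix) and post a request, for example:

curl -X POST http://localhost:7071/api/validate -H "Content-Type: application/json" -d '{"licenseKey":"123e4567-e89b-4d3c-8456-426614174000","nonce":"987fcdeb-51a2-4321-9b54-326614174000"}'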
Adding the client function

All of this server-side validation is pointless unless we properly verify responses on the client side. Our signed responses and nonce system are security features that only work if we actually validate them. Imagine if someone intercepted the server response and modified it to always return valid: true - without verification, our client would happily accept this fraudulent response. Similarly, without nonce checking, someone could capture a valid response and replay it later, effectively reusing a one-time license multiple times.

Two critical checks need to happen when we get a response from our validation endpoint:

We need to verify that the response's signature is valid using our public key. This ensures the response actually came from our server and wasn't tampered with in transit. Even the smallest modification to the response data would cause the signature verification to fail.

We must confirm that the nonce in the response matches the one we generated for this specific request. The nonce is like a unique ticket - it should only be valid for one specific validation attempt. This prevents replay attacks where someone could capture a valid response and reuse it later.

As outlined earlier, this client check could be written in any programming language, in any environment that allows an HTTP request. For consistency, I'll do mine in JavaScript, though know that there would be no issue in porting this over to Java, Kotlin, C++, C#, Go, etc. This function would then be implemented in your product and triggered when the user provides a license key that needs validating.

Let's start with a base function, with crypto imported ready for the signature verification and the licenseKey parameter provided by the user:

```javascript
const crypto = require('crypto');

async function validateLicense(licenseKey) {
}
```

Before we make our request to the validation server, we need to generate a unique nonce that we'll use to prevent replay attacks. Using the crypto module we just imported, we can generate a UUID for this purpose:

```javascript
const nonce = crypto.randomUUID();
```

Now we can make our request to the validation endpoint:

```javascript
try {
    const response = await fetch('https://your-function-url/validate', {
        method: 'POST',
        headers: {
            'Content-Type': 'application/json',
        },
        body: JSON.stringify({ licenseKey, nonce })
    });

    if (!response.ok) {
        const errorData = await response.json();
        throw new Error(errorData.error || 'License validation failed');
    }

    // ...
} catch (error) {
    console.error('License validation failed:', error.message);
    return false;
}
```

With our nonce generated, we make our HTTP request to the validation server. We send a POST request with our license key and nonce in the request body, making sure to set the Content-Type header to indicate we're sending JSON data. The response handling is thorough but straightforward: if we get anything other than a successful response (indicated by response.ok being false), we attempt to parse the error message from the response body and throw an error. For successful responses, we parse the JSON data, which will contain our validation result along with the nonce and cryptographic signature we'll need to verify. This gives us a clean way to handle both successful validations and various error cases (like invalid license keys or server errors) while ensuring we have proper data to perform our security checks.

Now, let's check the nonce:

```javascript
if (data.nonce !== nonce) {
    return false;
}
```

This simple but crucial comparison verifies that the response we received was actually generated for our specific request. By checking data.nonce !== nonce, we ensure the nonce returned by the server exactly matches the one we generated. If there's any mismatch, we return false immediately - we don't even bother checking the signature or license status, because a nonce mismatch indicates either a replay attack (someone trying to reuse an old valid response) or response tampering. Think of it like a ticket number at a deli counter - if you give them ticket #45 but they call out #46, something's wrong and you need to stop right there.

Now we perform our second and most crucial security check: cryptographic signature verification. Using Node's crypto module, we create a SHA-256 verifier and reconstruct the exact same data structure that the server signed - this includes the validation result, nonce, and license key in a specific order. We verify this data against the signature provided in the response using our public key. The server signed this data with its private key, and only the matching public key can verify it - any tampering with the response data would cause this verification to fail. If the signature is valid, we've confirmed two things: the response definitely came from our validation server (not an impersonator) and the data hasn't been modified in transit. Only then do we trust and return the validation result.
Let's add the code after the nonce check, which checks the signature:

```javascript
const verifier = crypto.createVerify('SHA256');
verifier.update(JSON.stringify({
    valid: data.valid,
    nonce: data.nonce,
    licenseKey: data.licenseKey
}));

const isSignatureValid = verifier.verify(
    process.env.PUBLIC_KEY,
    data.signature,
    'base64'
);

if (!isSignatureValid) {
    return false;
}

return data.valid;
```

The final code for this function should look like this:

```javascript
const crypto = require('crypto');

async function validateLicense(licenseKey) {
    const nonce = crypto.randomUUID();

    try {
        const response = await fetch('https://your-function-url/validate', {
            method: 'POST',
            headers: {
                'Content-Type': 'application/json',
            },
            body: JSON.stringify({ licenseKey, nonce })
        });

        if (!response.ok) {
            const errorData = await response.json();
            throw new Error(errorData.error || 'License validation failed');
        }

        const data = await response.json();

        if (data.nonce !== nonce) {
            return false;
        }

        const verifier = crypto.createVerify('SHA256');
        verifier.update(JSON.stringify({
            valid: data.valid,
            nonce: data.nonce,
            licenseKey: data.licenseKey
        }));

        const isSignatureValid = verifier.verify(
            process.env.PUBLIC_KEY,
            data.signature,
            'base64'
        );

        if (!isSignatureValid) {
            return false;
        }

        return data.valid;
    } catch (error) {
        console.error('License validation failed:', error.message);
        return false;
    }
}
```

With our validation function complete, we can now reliably check license keys from any part of our application. The function is designed to be straightforward to use while handling all the security checks under the hood - just pass in a license key and await the result. It returns a simple boolean: true if the license is valid and verified, false for any kind of failure (invalid license, network errors, security check failures). Here's how you might use it:

```javascript
const isValid = await validateLicense('123e4567-e89b-4d3c-8456-426614174000');

if (isValid) {
    // License is valid - enable features, start your app, etc.
} else {
    // License is invalid - show error, disable features, etc.
}
```

Testing

Let's run a couple of tests with this function. First, I'll use a valid license key that I've inserted into the database. The document looks like this:

```json
{
    "id": "123e4567-e89b-4d3c-8456-426614174000",
    "redeemed": false
}
```

Here's the response:

```json
{
    "valid": true,
    "nonce": "987fcdeb-51a2-4321-9b54-326614174000",
    "licenseKey": "123e4567-e89b-4d3c-8456-426614174000",
    "signature": "XkxZWR0eHpKTFE3VVhZT3JUEtYeXNJWUZ6SWJoTUtmMX0JhVXBHK1ZzN2lZYzdGSnJ6SEJa1VOCnBrWkhDU0xGS1ZFTDmFZN1BUeUlHRzl1V0tJPT0="
}
```

Running the function returns true, which means that the above response has passed the nonce and signature validation too! I also tested with a license key which does not exist in the database, 0bd84e7b-d91e-47e8-81b8-f39a5c1f8c72, and the result was that the function returned false.

Summary

Building a secure license validation system is a delicate balance between security and usability. The implementation we've created provides strong security guarantees through cryptographic signatures and nonce validation, while remaining straightforward to integrate into any application.
Let's recap the key security features we've implemented:

UUID-based license keys and nonces for standardized formatting
Server-side validation using Azure Cosmos DB for efficient license lookups
Response signing using RSA public-key cryptography
Nonce verification to prevent replay attacks
Comprehensive error handling with secure error messages

While this system provides robust protection against common attack vectors like response tampering and replay attacks, it's important to remember that no licensing system is completely unbreakable. Since validation code must run on untrusted machines, a determined attacker could potentially modify the software to bypass these checks entirely. Our goal is to make bypass attempts impractical and time-consuming while providing a frictionless experience for legitimate users.

The architecture we've chosen - using Azure Functions and Cosmos DB - gives us plenty of room to grow. Some potential enhancements could include:

Rate limiting to prevent brute force attempts
IP-based activation limits
License expiration dates
Feature-based licensing tiers
Usage analytics and reporting

The modular nature of our implementation means adding these features would require minimal changes to the core validation logic. Our schema-less database choice means we can evolve our license data structure without disrupting existing functionality.

Remember to store your private key securely and never expose it in client-side code. The public key used for verification can be distributed with your client applications, but the private key should be treated as a critical secret and stored securely in your Azure configuration.

This concludes our journey through building a secure license validation system. While there's always room for additional security measures and features, this implementation provides a solid foundation that can be built upon as your needs evolve.

Azure Functions Flex Consumption is now generally available
We are excited to announce that Azure Functions Flex Consumption is now generally available. This hosting plan provides the highest performance for Azure Functions, with concurrency-based scaling for both HTTP and non-HTTP triggers, scale from zero to 1,000 instances, and no cold start with the Always Ready feature. Flex Consumption also allows you to enjoy seamless integration with your virtual network at no extra cost, ensuring secure and private communication, with no considerable impact on your app's scale-out performance. Learn more about how to achieve high HTTP scale with Azure Functions Flex Consumption, the engineering innovation behind it, and project Legion, the platform behind Flex Consumption.

In addition to the fast scaling based on per-instance concurrency, you can choose between 2048 MB and 4096 MB instance sizes. As the function app receives requests, it will automatically scale from zero to as many instances of that instance size as needed, based on per-instance concurrency, and back to zero for cost efficiency when there are no more requests to process. You can also take advantage of the built-in integration with Azure Load Testing and the Performance Optimizer to optimize your HTTP functions for performance and cost.

Flex Consumption is now generally available for .NET 8 on the isolated worker model, Java 11, Java 17, Node 20, PowerShell 7.4, Python 3.10, and Python 3.11 in Australia East, East Asia, East US, North Europe, Southeast Asia, Sweden Central, UK South, and West US 2, and in preview in East US 2, South Central US, and West US 3. By December 9th, 2024, .NET 9 will also be generally available in Australia East, East Asia, East US, North Europe, Southeast Asia, Sweden Central, and UK South. Besides the currently supported DevOps and dev tools like VS Code, Java tooling, Azure Pipelines tasks, and GitHub Actions, you can now use the latest Visual Studio 2022 v17.12 update or newer to create and publish to Flex Consumption apps.

The Flex Consumption plan offers competitive pricing with flexible options to fit your needs, with GA pricing taking effect on December 1, 2024. For detailed pricing information, please refer to the pricing page.

Customer adoption and scenarios

We have been working with several internal and external customers during the public preview period, with hundreds of external customers actively using Flex Consumption.

"At Yggdrasil, we immediately started adopting Flex Consumption functions when they went into public preview, as they offer the combination of cost-efficiency, scalability, and security features we need to run our company. We already have 100 Flex Consumption functions running in production, and expect to move at least another 50 functions, now that the product has reached GA. We migrated to Flex from Consumption to have VNet integration and private endpoints." - Andreas Strandfelt, Partner & Senior Cloud Specialist at Yggdrasil Commodities ApS

"What really matters to us is that the app scales up and down based on demand. Azure Functions Flex Consumption is very appealing to us because of how it dynamically scales based on the number of messages that are queued up in Azure Event Hubs." - Stephan Miehe, GitHub Senior Director (public case study)

"We had a need to process a large queue, representing a significant volume of data with inconsistent availability.
Azure Functions Flex Consumption dramatically simplified the code footprint needed to perform this embarrassingly parallel task and helped us complete it in a much shorter timeframe than we had expected." - Craig Presti, Office of the CTO, Microsoft AI project

Going Forward

In the upcoming months we look forward to rolling out even more features for Flex Consumption, including:

Availability zones: Enabling availability zones will be possible for new and existing Flex Consumption apps
512 MB instance size: We will introduce a new, smaller instance size for more granular control
Enhanced tooling support: PowerShell modules and Terraform AzureRM support
New language versions: Support for the latest language versions like Node 22, Python 3.12, and Java 21
Expanded regional availability: The number of regions will continue to expand in early 2025, with UAE North, Central US, West US 3, South Central US, East US 2, West US, Canada Central, France Central, and Norway East coming first
Metrics support: Full Azure Monitor metrics support for Flex Consumption apps
Deployment improvements: Zero-downtime deployment to ensure no disruption to running executions
More triggers: Kafka and SQL triggers
Closing features: Addressing the limitations identified in Considerations

Please let us know which ones are most important to you!

Get Started!

Explore our reference samples, quickstarts, and comprehensive documentation to get started with the Azure Functions Flex Consumption hosting plan today!

Getting error message "Invoking Azure function failed with HttpStatusCode - Unauthorized"
I have a Synapse pipeline which contains a single component: an Azure Function activity. The objective is to send a test JSON payload to a private endpoint using a POST call. The Azure Function activity is configured to use the POST method, and an Azure Function linked service has also been specified in the activity. We have a Function App on the Premium plan, and the linked service points to that Function App. Inside the Function App, we have a function which contains the main Python code that makes the request. The Function App stack is Python, and the function created inside is an HTTP trigger using the V2 programming model, with the authorization level set to Function. When I debug the pipeline, I am getting the error message "Invoking Azure function failed with HttpStatusCode - Unauthorized". Please support in resolving this. Thanks

Unlock New AI and Cloud Potential with .NET 9 & Azure: Faster, Smarter, and Built for the Future
.NET 9, now available to developers, marks a significant milestone in the evolution of the .NET platform, pushing the boundaries of performance, cloud-native development, and AI integration. This release, shaped by contributions from over 9,000 community members worldwide, introduces thousands of improvements that set the stage for the future of application development. With seamless integration with Azure and a focus on cloud-native development and AI capabilities, .NET 9 empowers developers to build scalable, intelligent applications with unprecedented ease.

Expanding Azure PaaS Support for .NET 9

With the release of .NET 9, a comprehensive range of Azure Platform as a Service (PaaS) offerings now fully support the platform's new capabilities, including the latest .NET SDK for any Azure developer. This extensive support allows developers to build, deploy, and scale .NET 9 applications with optimal performance and adaptability on Azure. Additionally, developers can access a wealth of architecture references and sample solutions to guide them in creating high-performance .NET 9 applications on Azure's powerful cloud services:

Azure App Service: Run, manage, and scale .NET 9 web applications efficiently. Check out this blog to learn more about what's new in Azure App Service.
Azure Functions: Leverage serverless computing to build event-driven .NET 9 applications with improved runtime capabilities.
Azure Container Apps: Deploy microservices and containerized .NET 9 workloads with integrated observability.
Azure Kubernetes Service (AKS): Run .NET 9 applications in a managed Kubernetes environment with expanded ARM64 support.
Azure AI Services and Azure OpenAI Services: Integrate advanced AI and OpenAI capabilities directly into your .NET 9 applications.
Azure API Management, Azure Logic Apps, Azure Cognitive Services, and Azure SignalR Service: Ensure seamless integration and scaling for .NET 9 solutions.

These services provide developers with a robust platform to build high-performance, scalable, and cloud-native applications while leveraging Azure's optimized environment for .NET.

Streamlined Cloud-Native Development with .NET Aspire

.NET Aspire is a game-changer for cloud-native applications, enabling developers to build distributed, production-ready solutions efficiently. Available in preview with .NET 9, Aspire streamlines app development, with cloud efficiency and observability at its core. The latest updates in Aspire include secure defaults, Azure Functions support, and enhanced container management. Key capabilities include:

Optimized Azure integrations: Aspire works seamlessly with Azure, enabling fast deployments, automated scaling, and consistent management of cloud-native applications.
Easier deployments to Azure Container Apps: Designed for containerized environments, .NET Aspire integrates with Azure Container Apps (ACA) to simplify the deployment process. Using the Azure Developer CLI (azd), developers can quickly provision and deploy .NET Aspire projects to ACA, with built-in support for Redis caching, application logging, and scalability.
Built-in observability: A real-time dashboard provides insights into logs, distributed traces, and metrics, enabling local and production monitoring with Azure Monitor.

With these capabilities, .NET Aspire allows developers to deploy microservices and containerized applications effortlessly on ACA, streamlining the path from development to production in a fully managed, serverless environment.
Integrating AI into .NET: A Seamless Experience

In our ongoing effort to empower developers, we've made integrating AI into .NET applications simpler than ever. Our strategic partnerships, including collaborations with OpenAI, LlamaIndex, and Qdrant, have enriched the AI ecosystem and strengthened .NET's capabilities. This year alone, usage of Azure OpenAI services has surged to nearly a billion API calls per month, illustrating the growing impact of AI-powered .NET applications.

Real-World AI Solutions with .NET

.NET has been pivotal in driving AI innovations. From internal teams like Microsoft Copilot creating AI experiences with .NET Aspire to tools like GitHub Copilot, developed with .NET to enhance productivity in Visual Studio and VS Code, the platform showcases AI at its best. KPMG Clara is a prime example, developed to enhance audit quality and efficiency for 95,000 auditors worldwide. By leveraging .NET and scaling securely on Azure, KPMG implemented robust AI features aligned with strict industry standards, underscoring .NET and Azure as the backbone for high-performing, scalable AI solutions.

Performance Enhancements in .NET 9: Raising the Bar for Azure Workloads

.NET 9 introduces substantial performance upgrades, with over 7,500 merged pull requests focused on speed and efficiency, ensuring .NET 9 applications run optimally on Azure. These improvements contribute to reduced cloud costs and provide a high-performance experience across Windows, Linux, and macOS. To see how significant these performance gains can be for cloud services, take a look at what past .NET upgrades achieved for Microsoft's high-scale internal services:

Bing achieved a major reduction in startup times, enhanced efficiency, and decreased latency across its high-performance search workflows.
Microsoft Teams improved efficiency by 50%, reduced latency by 30-45%, and achieved up to 100% gains in CPU utilization for key services, resulting in faster user interactions.
Microsoft Copilot and other AI-powered applications benefited from optimized runtime performance, enabling scalable, high-quality experiences for users.

Upgrading to the latest .NET version offers similar benefits for cloud apps, optimizing both performance and cost-efficiency. For more information on updating your applications, check out the .NET Upgrade Assistant. For additional details on ASP.NET Core, .NET MAUI, NuGet, and more enhancements across the .NET platform, check out the full Announcing .NET 9 blog post.

Conclusion: Your Path to the Future with .NET 9 and Azure

.NET 9 isn't just an upgrade - it's a leap forward, combining cutting-edge AI integration, cloud-native development, and unparalleled performance. Paired with Azure's scalability, these advancements provide a trusted, high-performance foundation for modern applications. Get started by downloading .NET 9 and exploring its features. Leverage .NET Aspire for streamlined cloud-native development, deploy scalable apps with Azure, and embrace new productivity enhancements to build for the future. For additional insights on ASP.NET, .NET MAUI, NuGet, and more, check out the full Announcing .NET 9 blog post. Explore the future of cloud-native and AI development with .NET 9 and Azure - your toolkit for creating the next generation of intelligent applications.

Karpenter: Run your Workloads up to 80% Off using Spot with AKS
Using Spot Nodes with Karpenter

Add the toleration "karpenter.sh/disruption:NoSchedule" to the sample AKS vote application; this taint is applied by default to Spot nodes provisioned with an AKS cluster. Please refer to my GitHub repo for the application YAML and a sample NodePool config.

Scale down your application replicas to allow Karpenter to evict the existing on-demand nodes and replace them with Spot nodes. Then deploy and scale the vote application replicas so that Karpenter spins up Spot nodes based on the NodePool configuration and schedules the pods on Spot after validating the toleration. Karpenter spins up new Spot nodes and nominates them for scheduling the sample vote app.

Configuring Multiple NodePools

To configure separate NodePools for Spot and on-demand capacity: the Spot NodePool is configured with an E-series VM ("Standard_E2s_v5") and the on-demand NodePool with a D-series VM ("Standard_D4s_v5"). In a multi-NodePool scenario, each NodePool needs to be configured with a 'weight' attribute; the NodePool with the highest weight is prioritized over the others. Here we have the Spot NodePool with weight 100 and the on-demand NodePool with weight 60.
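For illustration, here is a hedged sketch of what the pod toleration and a weighted Spot NodePool could look like. The field names follow the upstream Karpenter NodePool API (v1beta1); verify them against the Karpenter version your AKS cluster runs:

```yaml
# Toleration on the vote app's pod spec, matching the Spot taint mentioned above.
tolerations:
  - key: "karpenter.sh/disruption"
    operator: "Exists"
    effect: "NoSchedule"
---
# Weighted Spot NodePool: when both pools can satisfy a pod, the higher weight wins.
apiVersion: karpenter.sh/v1beta1
kind: NodePool
metadata:
  name: spot
spec:
  weight: 100
  template:
    spec:
      requirements:
        - key: karpenter.sh/capacity-type
          operator: In
          values: ["spot"]
        - key: node.kubernetes.io/instance-type
          operator: In
          values: ["Standard_E2s_v5"]
```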
Enabling and disabling forwarding rule

Hello, we need to turn on a mail forwarding rule on a single mailbox within Microsoft 365. We looked at using an Azure Function App, and Copilot got us most of the way there, but we need some help with a 400 error:

Failed to enable rule: The remote server returned an error: (400) Bad Request.

The API authenticates, has the Mail.ReadWrite and Mail.Send permissions, and seems to be happy there. Is there a reason why this is giving a 400 error, as all the details (I thought) were in order?

```powershell
# Azure AD App details
$clientId = "your-client-id"
$clientSecret = "your-client-secret"
$tenantId = "your-tenant-id"

# Function parameters
$mailbox = "email address removed for privacy reasons"
$ruleId = "086b4cfe-b18a-4ca0-b8a6-c0cc13ab963e3208025663109857281" # Provided rule ID without backslash

# Get OAuth token
$body = @{
    client_id     = $clientId
    client_secret = $clientSecret
    scope         = "https://graph.microsoft.com/.default"
    grant_type    = "client_credentials"
}

try {
    $response = Invoke-RestMethod -Uri "https://login.microsoftonline.com/$tenantId/oauth2/v2.0/token" -Method Post -ContentType "application/x-www-form-urlencoded" -Body $body
    $token = $response.access_token
    Write-Output "Token acquired successfully."
} catch {
    Write-Error "Failed to get OAuth token: $_"
    return
}

# Enable the existing rule
$headers = @{
    Authorization = "Bearer $token"
    ContentType   = "application/json"
}
$body = @{
    isEnabled = $true
}

try {
    $jsonBody = $body | ConvertTo-Json
    Write-Output "JSON Body: $jsonBody"
    $response = Invoke-RestMethod -Uri "https://graph.microsoft.com/v1.0/users/$mailbox/mailFolders/inbox/messageRules/$ruleId" -Method Patch -Headers $headers -Body $jsonBody
    Write-Output "Rule enabled successfully: $($response | ConvertTo-Json)"
} catch {
    Write-Error "Failed to enable rule: $_"
    Write-Output "Response Status Code: $($_.Exception.Response.StatusCode)"
    Write-Output "Response Status Description: $($_.Exception.Response.StatusDescription)"
    if ($_.Exception.Response -ne $null) {
        $responseContent = $_.Exception.Response.Content.ReadAsStringAsync().Result
        Write-Output "Response Content: $responseContent"
    } else {
        Write-Output "No response content available."
    }
}

# Return response
Write-Output "Script completed."
```