
Codit Blog

Posted on 11 January 2018 08:49

by Toon Vanhoutte

BizTalk Server offers a great feature: both inbound maps (on receive ports) and outbound maps (on send ports) can be executed dynamically, depending on the message type of the incoming message. This message type is defined as rootNodeNamespace#rootNodeName. Below, you can find an example of a receive port configured with several inbound maps.

When migrating parts of BizTalk solutions to Azure Logic Apps, it's really handy to reuse this pattern. This blog post explains how you can do this.

Configure the Integration Account

In this step, we will prepare the prerequisites to build this functionality.

  • Create an integration account.
  • Upload the required XSLT maps.
  • Link your Logic App to the integration account via the Workflow Settings.

Create the Logic App

It's time to create a Logic App that uses this functionality. In this blog, I've opted for a request/response pattern, which allows easy testing through Postman.

 

  • The first action initializes an Array variable. It contains a list of all expected message types and the corresponding transform that must be executed for each of them.

 

  • The second action filters the array. It selects the object that matches the message type of the incoming message. The message type is determined through the following expression: xpath(xml(body('Transform_XML')), 'concat(namespace-uri(/*), ''#'', local-name(/*))')

 

  • The last action executes the mapping, whose name is determined at runtime via this expression: body('Select_inbound_map')[0].Transform
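To make the pattern concrete, here is a minimal Python sketch (purely illustrative, not part of the Logic App itself) of what these three actions do conceptually: determine the rootNodeNamespace#rootNodeName message type of the incoming XML and select the matching transform from the configured list. The message types and map names below are made-up placeholders.

```python
# Illustrative sketch of the lookup logic; message types and map names are
# hypothetical placeholders, not taken from the original Logic App.
import xml.etree.ElementTree as ET

# Equivalent of the Array variable: expected message types and their transforms.
inbound_maps = [
    {"MessageType": "http://contoso/orders#Order", "Transform": "Order_to_Canonical"},
    {"MessageType": "http://contoso/invoices#Invoice", "Transform": "Invoice_to_Canonical"},
]

def get_message_type(xml_payload: str) -> str:
    """Return rootNodeNamespace#rootNodeName, like the xpath() expression above."""
    root = ET.fromstring(xml_payload)
    # ElementTree exposes a namespaced root tag as '{namespace}localname'.
    namespace, _, local_name = root.tag.rpartition("}")
    return f"{namespace.lstrip('{')}#{local_name}"

def select_transform(xml_payload: str) -> str:
    """Equivalent of the Filter array action followed by [0].Transform."""
    message_type = get_message_type(xml_payload)
    matches = [m for m in inbound_maps if m["MessageType"] == message_type]
    if not matches:
        raise ValueError(f"No inbound map configured for message type '{message_type}'")
    return matches[0]["Transform"]

sample = '<Order xmlns="http://contoso/orders"><Id>123</Id></Order>'
print(select_transform(sample))  # -> Order_to_Canonical
```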

Test the Logic App

Let's use Postman to test the Logic App and verify that the correct mapping is executed in a dynamic way.

 

Conclusion

If you combine the right Logic App actions, you can quite easily give your workflows some dynamic behaviour. In case you would like to externalize the configuration that links message types and transforms, you could, for example, leverage Azure Blob Storage.

Categories: Azure
written by: Toon Vanhoutte

Posted on 5 January 2018 09:04

by Tom Kerkhove

Things change, and so does the cloud. New services are being added and integration between services is being improved, but services also become deprecated. We need to embrace change and design for it.

Our industry has shifted quite a lot in recent years: we have moved from spinning up our own servers on-premises to run our software, to hosting more and more in the cloud.

This brings a lot of benefits, agility being one of them. By moving away from yearly releases to monthly or weekly releases, product teams can get new features and services out of the door faster and receive feedback more easily. This helps you evolve your product quickly, and seeing how your consumers use it allows you to adapt or release bug fixes more quickly.

This is exactly what Microsoft Azure and other cloud platforms are doing. Every blink of an eye they release new features! Keeping up with all the latest and greatest is sometimes like drinking from a water hose: you can manage to do it, but not for long! Some might say that things are even going too fast, but that's a topic on its own.

The key learning of the journey I've seen so far is that things change, and you'd better be prepared.

Introduction of new services

Over time, ecosystems can expand by the addition of new services that can change the way you think about the systems that you are building or fill in the gaps that you now need to work around.

Azure Event Grid is one of the newest services in Microsoft Azure and brings a unique capability: support for sending notifications in event-driven architectures. This ties in with the recent "Serverless" trend where everything needs to be event-driven and you only care about the logic that needs to run, not how it's running. Event Grid was the last piece of the puzzle to go fully event-driven, which can make us question our current approach to existing systems.

Better integration between services

Another aspect of change is that services become easier to integrate with each other over time. This allows you to achieve certain things without having to do the heavy lifting yourself.

An example of this is Azure Logic Apps & Azure Table Storage. If you wanted to use these together in the past, you had to build & deploy your own custom Table Storage API App because it was not there out-of-the-box. Later on, a connector was added to the connector portfolio that gives you the same experience without having to do anything, and it allowed you to switch very easily.

Azure AD Managed Service Identity (MSI) is another good example: it makes authentication with Azure AD very easy, which in turn simplifies authentication with Azure Key Vault. No need to worry about storing authentication information on your compute nodes anymore, MSI handles it for you! And while this makes things easier for you, it's also more secure: you no longer run the additional risk of storing that information somewhere, it's handled by the ecosystem and not your problem anymore. It's not about completely removing security risks, it's about limiting them.
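As a small, hedged illustration of how little is left on your side, the sketch below reads a secret from Key Vault using the managed identity of the compute node via the Azure SDK for Python; the vault URL and secret name are placeholders.

```python
# Sketch: reading a Key Vault secret with the node's managed identity,
# so no credentials are stored in the application or its configuration.
# Vault URL and secret name are hypothetical placeholders.
from azure.identity import ManagedIdentityCredential
from azure.keyvault.secrets import SecretClient

credential = ManagedIdentityCredential()  # token comes from the MSI endpoint, not from config
client = SecretClient(vault_url="https://<your-vault>.vault.azure.net/", credential=credential)

secret = client.get_secret("database-connection-string")
print(f"Retrieved secret '{secret.name}' without storing any credentials")
```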

You've got to move it, move it.

But then comes the day that one of the services you depend on is no longer being invested in or, even worse, is being deprecated. Next thing you know, you need to migrate to another (newer) service or, if you're very unlucky, there is no migration path at all.

This is not a walk in the park because it comes with a lot of important questions:

  • Does it have the same feature set?
    • If not, do we need to migrate it to multiple services or look at using an offering from another vendor/community?
  • What is the new pricing story? Will it be more expensive?
  • What is the current status of the newer service? Is it stable enough (yet)?
  • How about the protocols that are being used, both by the old service and by the new alternatives?
    • Does it support the same protocols or are they proprietary?
    • Can we benefit from using open standards instead?
    • Does it bring any (new) vendor lock-ins?
  • Do I have to revise my ALM story or does it follow a similar approach?
  • And many more

Unfortunately, 2017 was the year in which Azure Access Control Service (ACS) was officially deprecated; existing customers have until November 7, 2018, before the service is shut down. This might sound like a long time, but migrating off one service onto another takes real effort: you need to evaluate alternatives, plan the migration, implement the changes, re-test everything and roll it out to all your users.

ACS, in particular, is an interesting case because they provide a decent migration guide and the blog post gives you guidance as well, but that does not mean that you're off the hook. While you can migrate to Azure AD or Azure AD B2C, these alternatives do not support all authentication protocols that ACS did. Luckily there are also communities that have (OSS) technology available, such as IdentityServer, but that's no guarantee that it has the same capabilities as what you are migrating from.

Is ACS an exception? Certainly not. Remember Azure Remote App? Gone. Azure Power BI Embedded? Deprecated and you should migrate to Power BI.

This is far from a rant; building systems is not the hard part, maintaining them is. And at a certain point in time, you need to make hard decisions which unfortunately sometimes impact customers.

More information on Power BI Embedded can be found here as well.

Deprecated? No, but you'd better use our vNext

Next to deprecation, some services are improved by launching a brand new major version that supersedes its precursor, a service upgrade if you will. This means that the service is still around, but that it has changed so dramatically that you will need to migrate as well.

Azure Data Factory, which I've written about recently, is a good example of this: you can still use Azure Data Factory v1, but v2 has arrived and will be the way forward. You can keep using the service that you like, but you will have to migrate, since there are potentially a few breaking changes because the infrastructure supporting it has changed.

You can see a service upgrade a bit as a light version of deprecation: your current version is going away, but you can stick around and use the new version. If you're lucky, you don't need to migrate at all, or you can use one of the provided migration tools to do it for you. You still need to make sure that everything keeps working once you make the switch, but you get new features in return.

Embracing Change

There is a variety of changes that can impact the architecture of your application, and we need to design for change because we will need it.

Another interesting aspect of the ACS lifecycle is that, if you've been around for a while, you might have noticed that the service didn't get any investments in the last couple of years, but neither did Azure Cloud Services. Do we need to panic? No. But it's safe to say that Cloud Services will go away at some point as well, and it's always good to look around and see if there are any alternatives. Do we need to switch as soon as possible? No.

Are only old services going away? No. Thanks to Agile it is very easy to deliver an MVP and see what the feedback is, but if nobody likes it or no business need gets fulfilled, it is probably going away. A good example of this is Azure BizTalk Services, which was around for only a year but was killed particularly fast because nobody really liked it; Azure Logic Apps is its successor, which people like a lot more.

It is crucial to find a balance between cutting-edge technology & battle-tested services. Every service brings something to the table, but is it really what you need or do you just want to use a new shiny technology/service? Compare all candidates and see what benefits & trade-offs they have and use the right service for the job.

I'm proud to say that I'm still using Azure Cloud Services, despite the lack of investment by Microsoft. Why? Because it gives me what I need and there is no alternative that is similar to what Cloud Services gives for our scenario. However, this does not mean that we will use it forever and we keep an eye open for the development of other services.

When new technologies or services arise, it's always good to have a look and see what their power is via a small spike or POC, but be cautious before you integrate them into your application. Is it worth switching (already)? Here are a few questions you could ask yourself:

  • What does it bring over the current solution?
  • What is the performance of it?
  • What is the risk/impact of it?
  • What is the monitoring story around it?
  • What is the security story around it?
  • Can we do automated deployments?

Embrace change. Make sure that you can easily change things in your architecture without your customers knowing about them.

How? Well, that always depends on your application. To give you one example: make sure that your public API infrastructure is decoupled from your internal infrastructure and that you use DNS for everything. Azure API Management is a perfect fit for this because it decouples the consumers from the backend, giving you control over things like advanced routing and security regardless of your physical backend. If you decide to decompose an API that is hosted on a Web App into multiple microservices running in Kubernetes or Azure Functions, you can very easily do that behind the scenes while your customers are still calling the same operations on your API proxy.

Certainly do this if you are working with webhooks that are called by third parties. You can ask consumers to call a new operation (although you should avoid that), but with webhook registrations you cannot. One year ago we decided to route all webhooks through Azure API Management so that we benefit from the routing aspect, but it also allows us to keep securing our physical API, since webhooks do not always support security as they should.

Conclusion

This article is far from a rant, but more to create awareness that things are moving fast and we need to find a balance between cutting-edge technology & battle-tested services.

Use a change-aware mindset when designing your architecture, because you will need it. Think about the things that you depend on, but also be aware that you can only do this to a certain degree.

In my example above I talked about using Azure API Management as a customer-facing endpoint for your API infrastructure. Great! But what if that one goes away? Then you'll have to migrate everything; you can only be cautious to a certain degree, because in the end you'll need to depend on something.

Thanks for reading,

Tom.

Posted on 10 December 2017 13:11

by Tom Kerkhove

Azure Logic Apps & Azure Data Factory are both orchestrators, but how do they differ from each other? Well, combining both is the sweet spot.

In my previous post, we went through the new features of Azure Data Factory 2.0: how it adds more triggers and allows you to build data pipelines to orchestrate your data integration both in the cloud and on-premises.

It is a serverless orchestrator where you can create pipelines that represent a workflow. In these pipelines you have sequences of activities, or steps, and have granular control on what to do if something fails or succeeds, etc. Every pipeline represents a business process that needs to run frequently or on demand.

Doesn't this sound like Azure Logic Apps? Well you are right... but to a certain degree!

Logic Apps & Data Factory

While they both offer serverless orchestration, I think they each shine in their own area: Azure Logic Apps is perfect for application integration, while Data Factory is excellent for data integration.

Azure Data Factory allows you to interact with your data at scale by stitching all your data stores together and building a data-centric platform inside your company, ranging from copying data from one place to another, over transforming data sets, to loading data with bulk imports and much more. It is fully optimized for data processing, so you don't need to worry about that part yourself.

On the other hand, I see Azure Logic Apps being more focused on application integration, where you can use it to unify all your internal & external services into a single infrastructure on which you can run your business processes and improve your company as a whole. The difference here is that it's not focused on the data itself, but more on the integration and connectivity of all these systems and how they communicate.

As with all Azure services, it's not about which service is better than the other; it's about using the correct tool to get the job done.

The general rule here is that if it's a data-centric workflow, Data Factory is probably your best bet. However, I think that combining Azure Logic Apps with Azure Data Factory is really the sweet spot.

By chaining these two orchestrators together you can create fully automated "pipelines" that join forces to achieve an end goal.

You can, for example, use an Azure Logic App that is in charge of data preparation by making data sets available in a specific data source, which triggers Azure Data Factory for further processing. The triggered pipeline then picks up the data in the data source, processes it and, when it's finished, triggers another Azure Logic App to act on the gained business insights. This could be a Logic App that interprets the data, creates a summary of today's sales and sends out an email to the CEO so that he is aware of how his company is doing.

Azure Data Factory & Logic Apps, better together.

Thanks for reading,

Tom.

Categories: Architecture, Azure, BizTalk
written by: Tom Kerkhove

Posted on 10 December 2017 13:08

by Tom Kerkhove

At Ignite, Microsoft announced Azure Data Factory v2, which enables more data integration scenarios and brings SSIS into the cloud.

Azure Data Factory is one of those services in Azure that is really great but that doesn't get the attention that it deserves.

It is a hybrid data integration service in Azure that allows you to create, manage & operate data pipelines. Basically, it is a serverless orchestrator that allows you to create data pipelines to move, transform and load data; a fully managed Extract, Transform, Load (ETL) & Extract, Load, Transform (ELT) service, if you will.

I've been using Data Factory a lot in the past year and it makes it very easy to create & manage data flows in the cloud. It comes with a wonderful monitoring experience which could be an example for other services like Azure Functions & Azure Event Grid where this would be beneficial.

However, Azure Data Factory was not perfect.

The drawbacks of Azure Data Factory

There were a couple of drawbacks & missing features when using the service:

  • Only Supports Data Slicing - The only way to schedule your data pipeline was to run it every x minutes, hours or days and process the data that fell in that time slice. You couldn't trigger it on demand at all.
  • No Granular Scheduling Control - There was no granular control over when the pipeline should be triggered in terms of calendar scheduling, e.g. only run the pipeline during the weekend.
  • Limited Operational Experience - Besides the Monitor portal, the monitoring experience was very limited. It only supported sending email notifications that were triggered under certain criteria, and it provided neither built-in metrics nor integration with Azure Monitor.
  • JSON All The Things - The authoring experience was limited to writing everything in JSON. There was also support for Visual Studio, but even there it was only for editing JSON files.
  • Learning Curve - The learning curve for new people was pretty steep. This is primarily because it used mainly JSON, and I think a code-free experience here would make things a lot easier.

Last but not least, the most frightening factor was radio silence. And for a good reason...

Enter Azure Data Factory 2.0.

Azure Data Factory 2.0

During Ignite, Microsoft announced Azure Data Factory 2.0 that is now in public preview.

Azure Data Factory 2.0 takes data integration to the next level and comes with a variety of triggers, integration with SSIS on-prem and in Azure, integration with Azure Monitor, control flow branching and much more!

Let's have a look at a couple of new features.

Introduction of Integration Runtime

A new addition is the concept of an Integration Runtime (IR). It represents the compute infrastructure that an Azure Data Factory pipeline uses to offer integration capabilities as close as possible to the data you need to integrate with.

Every integration runtime provides the capability to move data, execute SSIS packages and dispatch & monitor activities, and it comes in three different types: Azure-hosted, self-hosted (either in the cloud or on-premises) or Azure-SSIS.

Here is an overview of how you can mix and match them. 

Basically, the Azure Data Factory instance itself is only in charge of storing the metadata that describes what your data pipelines look like, while at execution time it dispatches the processing to Integration Runtimes in specific regions to handle the effective execution.

This allows you to work across regions more easily, while the execution stays as close to the data as possible.

As far as I can see, the self-hosted Integration Runtime also enables you to integrate with data that is behind a firewall without having to install an agent like you had to do in the past since everything is happening over HTTP.

Another big advantage here is that you can now run SSIS packages as part of your integration pipelines allowing you to re-use existing business intelligence that was already there, but now with the power of the cloud.

You can read more about the various Integration Runtimes in this article.

New pipeline triggers

Triggers, triggers, triggers! I think this is what excited me the most, because Data Factory only supported building data pipelines for scenarios where data slicing was used.

If you had scenarios where this was not the case, then there was no (decent) Data Factory pipeline that could help you.

The first interesting trigger: on-demand execution via a manual trigger. This can be done via .NET, PowerShell, REST or Python, and it can be useful when you want to trigger a data pipeline at the end of a certain process, regardless of what time it is.

A second trigger is the scheduler trigger that allows you to define a very granular schedule for when the pipeline should be triggered. This can range from every hour to every workday at 9 AM. This allows you to still have the simple data-slicing model if you prefer that, or define more advanced scheduling if that fits your needs.

For example, we had to run pipelines only during the workweek. With v1, this was not possible and we had pipeline failures every Saturday & Sunday. With scheduler triggers we can change this approach and define that the pipeline should only be triggered during the week.
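As a rough sketch of what such a weekday-only schedule could look like when scripted with the azure-mgmt-datafactory Python SDK (the model names reflect my understanding of the SDK and may differ per version; resource and pipeline names are placeholders):

```python
# Sketch: a scheduler trigger that fires Monday to Friday at 09:00 and starts an
# existing pipeline. Resource, factory and pipeline names are placeholders.
from datetime import datetime, timezone

from azure.identity import DefaultAzureCredential
from azure.mgmt.datafactory import DataFactoryManagementClient
from azure.mgmt.datafactory.models import (
    PipelineReference, RecurrenceSchedule, ScheduleTrigger,
    ScheduleTriggerRecurrence, TriggerPipelineReference, TriggerResource,
)

client = DataFactoryManagementClient(DefaultAzureCredential(), "<subscription-id>")

# Weekly recurrence restricted to workdays at 09:00 UTC.
recurrence = ScheduleTriggerRecurrence(
    frequency="Week",
    interval=1,
    start_time=datetime(2018, 1, 1, tzinfo=timezone.utc),
    time_zone="UTC",
    schedule=RecurrenceSchedule(
        week_days=["Monday", "Tuesday", "Wednesday", "Thursday", "Friday"],
        hours=[9],
        minutes=[0],
    ),
)

trigger = ScheduleTrigger(
    recurrence=recurrence,
    pipelines=[TriggerPipelineReference(
        pipeline_reference=PipelineReference(
            type="PipelineReference", reference_name="DailySalesPipeline"
        ),
    )],
)

client.triggers.create_or_update(
    "<resource-group>", "<factory-name>", "WeekdayMorningTrigger",
    TriggerResource(properties=trigger),
)
```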

Another great addition is that you can now pass parameters to use in your pipeline. This can be whatever information you need, just pass it when you trigger it.
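And here is a hedged sketch of the on-demand (manual) trigger via the same Python SDK, passing such parameters along; pipeline, parameter and resource names are again placeholders.

```python
# Sketch: triggering a pipeline run on demand and passing parameters to it.
# Pipeline, parameter and resource names are placeholders.
from azure.identity import DefaultAzureCredential
from azure.mgmt.datafactory import DataFactoryManagementClient

client = DataFactoryManagementClient(DefaultAzureCredential(), "<subscription-id>")

run = client.pipelines.create_run(
    "<resource-group>",
    "<factory-name>",
    "DailySalesPipeline",
    parameters={"inputFolder": "sales/2017-12-10", "region": "EU"},
)

# The returned run id can be used to poll the outcome afterwards.
pipeline_run = client.pipeline_runs.get("<resource-group>", "<factory-name>", run.run_id)
print(pipeline_run.status)
```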

In the future, you will also be able to trigger a pipeline when a new file has arrived. However, by using the manual trigger, you could already set this up with Azure Event Grid & a Logic App, as far as I can see.

Last but not least - One pipeline can now also have multiple triggers. So, in theory, you could have a scheduler trigger but also trigger it manually via a REST endpoint.

It's certainly good stuff and you can find a full overview of all supported triggers here.

Data Movement, Data Transformation & Control Flow Activities

In 2.0 the concept of Activities has been separated into three new concepts: Data Movement, Data Transformation & Control Flow Activities.

Control Flow Activities allow you to create more reactive pipelines, in the sense that you can now react to the outcome of the previous activity. This allows you to execute an activity only if the previous one ended in a specific state: succeeded, failed or skipped.

This is a great addition because it allows you to compensate for or roll back certain steps when the previous one failed, or to notify people when required.

Control Flow Activities also provide you with more advanced flow controls such as For Each, Wait, If/Else, Execute other pipelines and more!

Here's a visual summary:

This tutorial gives you a nice run-through of the new control flow activities.
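To make the outcome-based dependencies more tangible, here is a rough sketch using the azure-mgmt-datafactory Python SDK (model names as I understand the SDK, so verify against your version; activity and resource names are placeholders):

```python
# Sketch: placeholder Wait activities chained by outcome-based dependencies.
# 'ProcessData' only runs when 'PrepareData' succeeds; 'NotifyOnFailure' only
# runs when it fails. Activity and resource names are placeholders.
from azure.identity import DefaultAzureCredential
from azure.mgmt.datafactory import DataFactoryManagementClient
from azure.mgmt.datafactory.models import ActivityDependency, PipelineResource, WaitActivity

client = DataFactoryManagementClient(DefaultAzureCredential(), "<subscription-id>")

prepare = WaitActivity(name="PrepareData", wait_time_in_seconds=5)

process = WaitActivity(
    name="ProcessData",
    wait_time_in_seconds=5,
    depends_on=[ActivityDependency(activity="PrepareData", dependency_conditions=["Succeeded"])],
)

notify_on_failure = WaitActivity(
    name="NotifyOnFailure",
    wait_time_in_seconds=5,
    depends_on=[ActivityDependency(activity="PrepareData", dependency_conditions=["Failed"])],
)

client.pipelines.create_or_update(
    "<resource-group>", "<factory-name>", "ControlFlowDemo",
    PipelineResource(activities=[prepare, process, notify_on_failure]),
)
```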

Authoring Experience

In the past, one of the biggest pains was authoring pipelines. Everything was in JSON and there was no real alternative besides the REST API.

In v2, however, you can use the tool that gets your job done by choosing from a variety of technologies, going from .NET & Python to pure REST, or you can script it with PowerShell!

You can also use ARM templates that have embedded JSON files to automatically deploy your data factory.

But what I like the most is the sneak peek of the visual tooling that Mike Flasko gave at Ignite 2017:

It enables you to author pipelines by simply dragging & dropping activities in the way your business process is modeled. This abstracts away the JSON structure behind it, allowing people to jump on the Data Factory bandwagon more easily.

This visual experience also gives you a clear overview of how all the services tie together, and it serves as a form of documentation to a certain degree. If a new person joins the team, they can easily see the big picture.

However, this is not available yet and is only coming later next year.

Mapping data with Data Flow

One feature that is not there yet, but is coming early 2018, is the Data Flow activity that allows you to define data mappings to transform your datasets in your pipeline.

This feature is already in v1 but the great thing is that for this one you will also be able to use the code-free authoring experience where it will help you create those mappings and visualize what they will look like.

We currently use this in v1 and I have to say that it is very nice, but not easy to get there if you need to do this in JSON. This visualizer will certainly help here!

Improved Monitoring experience

As of October, the visual monitoring experience was added to the public preview which is very similar to the v1 tooling.

For starters, it lists all your pipelines and all their run history allowing you to get an overview of the health of your pipelines:

If you're interested in one particular run, you can drill deeper and see the status of each activity. Next to that, if one has failed you can get more information on what went wrong:

Next to that, you can also filter on certain attributes so that you can see only the pipelines that you're interested in.

Another great aspect is that Azure Data Factory v2 now integrates with Azure Monitor and comes with built-in metrics such as run, activity and trigger outcomes. This allows you to configure Azure Alerts based on those metrics and integrate them with your overall alert handling, instead of relying only on email notifications. This is a very big plus for me personally!

Diagnostic logs can now also be stored in Azure Storage, sent to Azure Event Hubs & analyzed in Operations Management Suite (OMS) Log Analytics!

Read more about the integration with Azure Monitor & OMS here.

Taking security to the next level

One of the most important things in software is security. In the past, every linked service had its passwords linked to it and Azure Data Factory handled this for you.

In v2, however, this approach has changed.

For starters - When you provision a new Azure Data Factory, it will automatically register a new managed Azure AD Application in the default Azure AD subscription.

This not only enables you to copy data from/to Azure Data Lake Store, it also enables you to integrate with Azure Key Vault.

By creating an Azure Key Vault linked service, you can store the credentials of all your other linked services in a vault. This gives you full control over managing the authentication keys for the external services and the capability to roll keys automatically without breaking your data factories.

Authentication with Azure Key Vault is fully managed by Data Factory based on the Azure AD Application that was created for you. The only thing you need to do is grant your AD Application access to the vault and create a linked service in your pipeline.
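As a rough sketch (again using the azure-mgmt-datafactory Python SDK; vault, factory and secret names are placeholders, and the model names should be verified against your SDK version), registering a Key Vault linked service and referencing one of its secrets from another linked service could look like this:

```python
# Sketch: a Key Vault linked service plus a blob storage linked service whose
# connection string lives in that vault. All names are placeholders.
from azure.identity import DefaultAzureCredential
from azure.mgmt.datafactory import DataFactoryManagementClient
from azure.mgmt.datafactory.models import (
    AzureBlobStorageLinkedService, AzureKeyVaultLinkedService,
    AzureKeyVaultSecretReference, LinkedServiceReference, LinkedServiceResource,
)

client = DataFactoryManagementClient(DefaultAzureCredential(), "<subscription-id>")
rg, factory = "<resource-group>", "<factory-name>"

# 1. Register the vault itself; Data Factory authenticates to it with its own
#    Azure AD identity, so no secret is stored here.
key_vault = LinkedServiceResource(
    properties=AzureKeyVaultLinkedService(base_url="https://<your-vault>.vault.azure.net/")
)
client.linked_services.create_or_update(rg, factory, "MyKeyVault", key_vault)

# 2. Other linked services reference a secret in that vault instead of embedding
#    a connection string in the factory definition.
blob = LinkedServiceResource(
    properties=AzureBlobStorageLinkedService(
        connection_string=AzureKeyVaultSecretReference(
            store=LinkedServiceReference(
                type="LinkedServiceReference", reference_name="MyKeyVault"
            ),
            secret_name="blob-connection-string",
        )
    )
)
client.linked_services.create_or_update(rg, factory, "MyBlobStorage", blob)
```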

More information about handling credentials in Data Factory can be found in this article or read more about data movement security here.

Migration Path to v2

As of today you can already start creating new pipelines for Azure Data Factory v2 or migrate your v1 pipelines over to v2. However, this is currently a manual process and not all features from v1 are available yet, such as the Data Flow.

In 2018 they will provide a tool that can migrate your v1 pipelines to v2 for you, so if it's not urgent I'd suggest sitting back and waiting for it to land.

Making Data Factory more robust over time

While I'm a fan of the recent changes to Azure Data Factory, I think it can be improved by adding the following features to make the total experience more robust:

  • The concept of pipeline versioning, where all my pipeline definitions, regardless of how they are created, have a version stamped on them that is displayed in the Azure/Monitor portal. That way, we can easily see if issues are related to a new version that was deployed or if something else is going on.
  • As far as I know, correlation ids are not supported yet in Azure Data Factory and would be a great addition to improve the overall operational experience even more. It would allow you to provide end-to-end monitoring which can be interesting if you're chaining multiple pipelines, or integrate with other processes outside Data Factory. In the monitoring portal, you can currently see the parameters but would be nice if you could filter on a specific correlation id and see all the related pipelines & activities for that.
  • While they are still working on the code-free authoring portal, I think they should provide the same experience in Visual Studio. It would allow us to have the best of both worlds - a visualizer to author a pipeline, jumping to the code behind for more advanced things, and integrating it with source control without having to leave Visual Studio.
  • Integration with Azure Data Catalog would be really great because then we can explore our internal data catalog to see if we have any valuable data sources and connect to them without having to leave the authoring experience.

But we have to be reasonable here - Azure Data Factory v2 was only recently launched into public preview so these might be on their radar already and only come later.

Conclusion

The industry is moving away from using one-data-store-to-rule-them-all and is shifting to a Polyglot Persistence approach where we store the data in the data store that is best suited. With this shift comes a need to build integration pipelines that can automate the integration of all these data stores and orchestrate all of this.

Azure Data Factory was a very good start, but as I mentioned it was lacking on a couple of fronts.

With Azure Data Factory 2.0 it feels like it has matured into an enterprise-ready service that allows us to achieve this enterprise-grade data integration between all our data stores, processing, and visualization thanks to the integration of SSIS, more advanced triggers, more advanced control flow and the introduction of Integration Runtimes.

Data integration is more important than ever and Azure Data Factory 2.0 is here to help you. It was definitely worth the radio silence and I am looking forward to migrating our current data pipelines to Azure Data Factory 2.0 which allows us to simplify things.

Want to learn more about it? I recommend watching the "New capabilities for data integration in the cloud" session from Ignite.

Thanks for reading,

Tom Kerkhove.

Posted on 4 December 2017 20:51

by Glenn Colpaert

The Internet of Things (IoT) is a business revolution enabled by technology and is no longer just for early adopters, it offers tremendous business opportunities.

With Microsoft IoT Central, a new SaaS solution, Microsoft is helping to solve IoT challenges.

Microsoft IoT Central is now available in Public Preview!

 


As already explained in this blogpost, the path to build, secure and provision a scalable IoT solution from device to cloud can be complex. Evolving products with IoT in most cases requires some up-front investment and a whole new set of skills to be learned.

With Microsoft IoT Central, a new SaaS solution, Microsoft is helping to solve these challenges.

Meet Microsoft IoT Central

Microsoft IoT Central was first announced in April 2017. Since then, Microsoft has been working with partners and customers in a private preview to align business and user scenarios with the product functionality; today, Microsoft IoT Central is available in public preview.

Microsoft IoT Central is a SaaS (Software-as-a-Service) offering that reduces the complexity of IoT solutions. It is fully managed and makes it easy to create IoT solutions by removing the management burden, operational costs and overhead of a typical IoT project.

A silver bullet for IoT?

There's more than one approach when building an IoT Solution with the Microsoft Azure platform. With the announcement of Microsoft IoT Central it's important to determine whether you need a PaaS or SaaS offering.

SaaS solutions allow you to get started quickly with a pre-configured IoT solution offering, whereas PaaS solutions provide the building blocks for companies to construct customized IoT solutions.

The decision between PaaS and SaaS depends on your business, your expertise and the amount of control and customization you desire.

If you need more information, please check out the following announcement blog posts by Microsoft:

I'll be further exploring this new Microsoft offering in the coming days and will keep you posted on my findings and the outcomes.

Cheers,

Glenn

Categories: Azure, Products
written by: Glenn Colpaert