The topic: Transfer data between environments using two Power Automate flows
Whether you're a seasoned Power Platform pro or just embarking on your journey, you've likely encountered the need to move data from one environment to another.
There are many different ways to achieve this goal. In this article, we'll explore the advantages of a particular data transfer solution that utilizes two cloud flows.
This can be useful, for example, if you have a solution that you install repeatedly in different environments or even in different tenants: you can use this flow solution to transfer test data from the source environment to the target environment. Another advantage is that the transfer can easily be repeated simply by re-running the flow.
The solution in a nutshell
In short, the solution consists of a flow with a "When a HTTP request is received" trigger in the source environment and a flow with an HTTP action in the target environment; the data is exchanged via this HTTP request. Both flows are explained in detail below.
Flow 1: Create a Power Automate cloud flow in the source environment
The first flow is created in the source environment, i.e. the environment the data comes from. It uses the "When a HTTP request is received" trigger with the method set to "GET".
The first action queries the data, in this example an entire Dataverse table ("List rows" action). The resulting data (the body) is then returned via a "Response" action.
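To give you a feel for what is sent over the wire, here is a minimal sketch of the kind of body the "List rows" action produces and the "Response" action sends back. The records sit in a "value" array; the table and column names below are just hypothetical placeholders for whatever your own table contains.

```python
# Hypothetical example of the JSON body returned by the "Response" action.
# Dataverse's "List rows" output wraps the records in a "value" array;
# the columns shown here (firstname, lastname, emailaddress1) are only
# placeholders for the columns of your own table.
sample_response_body = {
    "value": [
        {
            "contactid": "00000000-0000-0000-0000-000000000001",
            "firstname": "Ada",
            "lastname": "Lovelace",
            "emailaddress1": "ada@example.com",
        },
        {
            "contactid": "00000000-0000-0000-0000-000000000002",
            "firstname": "Alan",
            "lastname": "Turing",
            "emailaddress1": "alan@example.com",
        },
    ]
}
```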
With that, the first flow is done. Note that when you save the flow, a URL is generated with which the flow can be triggered; we will need this URL in the second flow. By default this URL is unprotected, and anyone who has it can query the data. Therefore, never use this solution for confidential data without additional security mechanisms.
Flow 2: Create a Power Automate cloud flow in the target environment
We create the second flow in the target environment, i.e. the environment into which we want to bring the data.
We add an "HTTP" action and enter the URL of the first flow. With that, the actual data transfer is already in place. The data can then be processed however you like; for example, if a table was transferred, the response can be broken down again with a "Parse JSON" action and then written to a database.
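Conceptually, the "HTTP" action does nothing more than an unauthenticated GET request against the trigger URL of the first flow. Here is a small Python sketch of that step, just to make the mechanism visible; the URL is a placeholder that you would replace with the URL copied from your own trigger.

```python
import requests

# The "HTTP" action in the target flow boils down to a plain GET request
# against the trigger URL generated by the first flow. The URL below is a
# placeholder; paste the URL from your own "When a HTTP request is received"
# trigger (it contains a signature in its query string, which is exactly
# why the link must be kept secret).
TRIGGER_URL = "https://<paste-the-trigger-url-from-flow-1-here>"

response = requests.get(TRIGGER_URL, timeout=60)
response.raise_for_status()

# The body is the JSON that the "Response" action of the source flow returned.
data = response.json()
print(f"Received {len(data['value'])} rows")
```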
Tip: Before you add the parse action, you can already run the flow. That way you can check whether the data arrives, and you can copy the output of the HTTP action and use it as the sample payload for the "Parse JSON" action.
In my example, the data is parsed and then inserted into a Dataverse table (with the same structure as in the source environment) using an "Add a new row" action.
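To make this last step concrete, here is a rough Python sketch of the logic: loop over the parsed rows and create one record per item in the target table. The write_row function is a hypothetical stand-in for the "Add a new row" action in the flow.

```python
# Rough sketch of the final step: iterate over the parsed rows and create
# one record per item in the target table. write_row() is a hypothetical
# stand-in for the "Add a new row" action.
parsed_body = {
    "value": [
        {"firstname": "Ada", "lastname": "Lovelace"},
        {"firstname": "Alan", "lastname": "Turing"},
    ]
}

def write_row(row: dict) -> None:
    # In the real flow, this is where "Add a new row" maps each parsed
    # column to a column of the target Dataverse table.
    print(f"Creating row for {row['firstname']} {row['lastname']}")

for row in parsed_body["value"]:
    write_row(row)
```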
Now you can run the flow and watch the magic happen :-)
Conclusion
This post is also intended to show what is possible with HTTP requests; the scenario shown here is just one of countless possibilities. As mentioned above, the first flow makes the data available, unprotected, to anyone who has the link. Confidential data should therefore never be transferred this way, only test data, unless you add additional security measures.
Do you like this article?
Please write a comment or leave a like below and follow me on LinkedIn.
If you have any questions, I'm happy to help.
Marc Sigrist (Powerfully GmbH, Switzerland)