Recognize My Speech (with MS Cognitive Services)

We wanted to test Microsoft Cognitive Services for use in an innovative project. Built on artificial intelligence algorithms, these services can recognize audio, faces and, in some cases, even the identity of the speaker. In this post we share some of the code we used to recognize speech recorded through a web page using a simple <video> tag. Let's see how.

Starting Page

The start page has a very simple layout consisting of two video tags which serve two purposes. Speech is recorded through the first, while the second lets us review the recorded video and then either delete it or send it to the Microsoft services to be processed.

JS Code

Let's start immediately with the code contained in the jQuery function. After wiring up the command buttons, we define the parameters to pass to the video device objects present in the DOM. Then we use the (read-only) Navigator.mediaDevices property. This property returns a MediaDevices object, which provides access to connected media input devices like cameras and microphones, as well as screen sharing. Warning! The behavior differs between localhost and production. In the first case the property is always populated and accessible; in the second you must be on HTTPS, otherwise things will not work, for security reasons. You will know everything is fine when the browser's authorization popup asks for permission to use the camera and microphone.

Moving on...

After invoking the getUserMedia function, passing it the object with the capture parameters, the callback receives an object representing the byte stream coming from our 'video' object. This stream becomes the value of the srcObject property. The srcObject property of the media element interface sets or returns the object that serves as the source of the media associated with the element, the video tag in our case. The stream will be recorded in an array of bytes declared in the chunks[] variable. Let me show how this video is saved.

Whenever a byte chunk is made available by the stream, it is stored in the array. At the end of the recording, the blob is created and made available to the second video tag so that, once the result is reviewed, we can decide whether to use it or delete it. When we have decided the video is correct, we send it to the server, which takes care of calling the Cognitive Services and returning what they have understood. Let's discover how to send an mp4 video.

In order to send a video, the simplest approach is to build a Blob object, specifying the raw data of the stream stored in the chunks[] array and indicating the type of file we are sending, 'video/mp4' in our case. Then, using a FormData object, we can simulate the presence of a form on the page collecting all the input data to be sent to the server. In our case we add the newly created blob, indicating the name by which the server will recognize it. We set the headers and await the response of a fetch operation which, once completed, lets us deserialize the JSON response and see what our Cognitive Services have understood. As a final touch, in addition to inserting the returned text into a part of the page, we use the browser's speechSynthesis object to read the result aloud.
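What FormData assembles under the hood is an ordinary multipart/form-data body. The following is a minimal Python sketch of that payload; the field name, filename and boundary string are illustrative assumptions, not values taken from the original page:

```python
BOUNDARY = "----DemoBoundary1234"  # illustrative; browsers generate a random one

def build_multipart_body(field_name, filename, content, content_type="video/mp4"):
    """Assemble a single-file multipart/form-data request body, mirroring
    what the browser's FormData object does for us behind the scenes."""
    head = (
        f"--{BOUNDARY}\r\n"
        f'Content-Disposition: form-data; name="{field_name}"; filename="{filename}"\r\n'
        f"Content-Type: {content_type}\r\n\r\n"
    ).encode("utf-8")
    tail = f"\r\n--{BOUNDARY}--\r\n".encode("utf-8")
    return head + content + tail

headers = {"Content-Type": f"multipart/form-data; boundary={BOUNDARY}"}
body = build_multipart_body("video", "speech.mp4", b"\x00\x01\x02")
```

The server then reads the part back by the same field name, which is why the name chosen on the client must match the parameter name expected by the action.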

The Server Side (ASP.NET Core C#)

Let's move to the server side. First, let's see how an ASP.NET Core action receives a file sent from the JavaScript we just saw.

The IFormFile parameter (with the same name indicated in the JavaScript part) is bound to the data stream coming from the AJAX request. If everything is OK, its length will be greater than 0. At this point we must store the byte stream in a file on the server's file system so that it can be manipulated later to extract the audio part. Let's dive into this part of the code.

We first restore the file to its .mp4 format and then extract the audio part from it, storing it in a .wav file. To do this, you can directly use the .NET version of the FFmpeg libraries (free and completely cross-platform) or rely on third-party libraries that also support .NET Core. At this point we have the wav file to submit to our Microsoft Cognitive Services. We have two options, as shown in the official documentation: either use fully server-side objects based on the SpeechRecognizer class, or call the REST APIs via an HttpClient, which provide the same kind of information. It goes without saying that you need an Azure subscription and an activated Speech Recognition service, which for a limited volume of requests remains available at very low cost. We chose the second option. Let's look in particular at the construction of the HTTP request.
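Before building the request, the audio-extraction step mentioned above can be sketched as a direct FFmpeg invocation. This is a hedged sketch: the output format shown (16-bit PCM, mono, 16 kHz) is a common choice for speech services, but you should verify the exact format your service expects in its documentation:

```python
import subprocess

def ffmpeg_extract_audio_cmd(mp4_path, wav_path):
    """Build an ffmpeg command line that strips the video track (-vn) and
    re-encodes the audio as 16-bit PCM, mono, 16 kHz -- a format commonly
    accepted by speech services (check the docs for your case)."""
    return [
        "ffmpeg", "-y",          # overwrite the output file if it exists
        "-i", mp4_path,          # input video recorded by the page
        "-vn",                   # drop the video stream
        "-acodec", "pcm_s16le",  # 16-bit little-endian PCM
        "-ar", "16000",          # 16 kHz sample rate
        "-ac", "1",              # mono
        wav_path,
    ]

cmd = ffmpeg_extract_audio_cmd("speech.mp4", "speech.wav")
# To actually run it (requires ffmpeg on the PATH):
#, check=True)
```

The same flags apply whether you shell out to ffmpeg or use a .NET wrapper around the FFmpeg libraries, since the wrappers ultimately expose the same options.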

After sending the wav stream, with the headers properly filled in with the Subscription Key obtained when the service was activated on Azure, the server replies with a JSON in the format illustrated below:
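A sketch of that request and of the shape of the reply, in Python for brevity. The region, key, endpoint path and sample JSON below are assumptions based on the Speech-to-Text REST documentation, not values from the original post (which only shows that the language is passed in the querystring; note the REST endpoint expects a full locale such as it-IT rather than the bare it):

```python
import json

REGION = "westeurope"            # assumption: your Azure region
SUBSCRIPTION_KEY = "<your-key>"  # obtained when activating the service on Azure

def build_speech_request(language="it-IT"):
    """Build the URL and headers for the short-audio Speech-to-Text REST call.
    Endpoint shape taken from the Speech service docs; verify against the
    current documentation for your subscription."""
    url = (f"https://{REGION}"
           f"recognition/conversation/cognitiveservices/v1"
           f"?language={language}&format=detailed")
    headers = {
        "Ocp-Apim-Subscription-Key": SUBSCRIPTION_KEY,
        "Content-Type": "audio/wav; codecs=audio/pcm; samplerate=16000",
    }
    return url, headers

# Illustrative shape of the detailed-format JSON the service returns:
sample_response = json.loads("""
{"RecognitionStatus": "Success",
 "NBest": [{"Confidence": 0.94, "Display": "Ciao Alexia, come stai?"}]}
""")
best = sample_response["NBest"][0]  # recognized text plus its confidence score
```

The Confidence field in NBest is the "degree of confidence" discussed below.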

It is interesting to note, from an artificial intelligence point of view, how the answer is accompanied by the degree of confidence with which our speech has been recognized. It should also be noted that the endpoint used takes a querystring parameter indicating that the language to be processed is Italian (?language=it ....)

The Result


You can appreciate the finesse with which the Cognitive Services perfectly identified the name Alexia (with the X), while the browser's text-to-speech service still gets a bit confused about the correct accent. Obviously this was just a demonstration of how effective and precise the AI services provided by Microsoft are. If you try them, you will notice that the confidence level of the answers is always very high. We also found this in the facial recognition services. In our opinion, Microsoft has done a really great job.

Some tips on smart contract migration to Azure Blockchain Service (Quorum)

Tools Versions

The first thing to check is the version of the various tools needed to create and migrate the smart contracts. To check them, just use the following command line:

It is essential to adopt exactly the versions shown in the figure, especially the version of Web3.js, which must be 1.0.0-beta.37. You must also make sure that the Python version is 2.7.

New Account

At this point it is necessary to create a new account within our blockchain. We declare the network with all its features in the truffle-config.js file

Then we connect to the node through a truffle console and use the following command line:


We modify the truffle-config.js through the insertion of the new account that we will use for the migration:

Now Migrate!

Let us underline an important point. As shown in the figure, we had to insert a directive concerning the Solidity compiler. Once we specified the type of virtual machine (in our case 'byzantium', which apparently is not the default), our migration succeeded with the following command line:

I recommend using the --reset option


A more powerful IoT data management in an Ethereum-like Blockchain


In the previous post about managing data coming from an IoT device, we created a smart contract that could hold all the data exactly as transmitted by the device. The data was stored on the blockchain exactly in the sending format. A fundamental problem now arises: how much data are we willing to store in the blockchain when data arrives very frequently? Is it necessary to create a smart contract for every type of data coming from the device? We can instead store only the hash of the data on-chain and keep all the data coming from the device off-chain. Let's see how.

Storing the data Hash 

Let's look at the picture below and then describe it in detail:

1) the IoT Device sends its data in the default format for that specific device

2) the IoT Hub (in our case an ASP.NET Core web application) receives the data from the device, creates the hash, and then sends to the blockchain (as parameters of a smart contract method) the hash and an array of strings containing all the data received

3) The BlockChain creates a smart contract that contains the newly transmitted hash

4) This triggers an event that retransmits the input data (concatenated by means of a special character, in our case an asterisk). In this way we are sure that the blockchain received exactly the data, which must obviously coincide with the data received from the device.

5) A record is created in an off-chain database containing the newly created blockchain address and the transmitted data joined by the special character ('*')

What are the advantages of this approach? First, only the hashes of the data are stored on the blockchain. Second, it simplifies the smart contract, which now has only the field containing the data hash; the same contract can therefore be used for all types of devices, without having to create a new contract for each different data format.
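The hashing step performed by the IoT Hub (point 2 above) can be sketched as follows. One loud assumption: the post does not name the hash algorithm, so SHA-256 from the standard library stands in here (an Ethereum stack would typically use Keccak-256, which needs a third-party package):

```python
import hashlib

def hash_device_data(values):
    """Join the device readings with the '*' separator used in the post
    and hash the result. SHA-256 is an assumption: the original post does
    not name the algorithm used by the IoT Hub."""
    payload = "*".join(values)
    return hashlib.sha256(payload.encode("utf-8")).hexdigest()

# Sample readings in the order the device sends them (illustrative values)
readings = ["Link Smart 7688", "23.5", "89.2", "44.534041", "11.124427"]
data_hash = hash_device_data(readings)
```

Only `data_hash` goes on-chain; the `'*'`-joined string is what ends up in the off-chain database record.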

The smart contracts

The first smart contract that we see is the smart contract that contains the data hash

Note the first pragma directive. It indicates that we are using a new (experimental) version of the ABI encoder. Naturally, the Ethereum team's advice is not to use this directive in production contracts, because it is still in an experimental phase. Furthermore, we know that under certain (certainly rare and complex) conditions this directive can introduce vulnerabilities. However, this directive must be used if, for example, we want to pass an array of strings to a method or a constructor, as in the smart contract illustrated above.

In the figure above we have instead the first part of the smart contract that manages the storage and retrieval of data in IoTHash contracts. Note how the DataArrived event is defined; it is raised when the new IoTHash smart contract instance is stored, and it returns the concatenated data and the address of the newly created smart contract.

Managing strings in Solidity

The second part of the smart contract code concerns the methods for managing the creation of the new smart contract.

The method receives as input both the data hash and the string array containing the data coming from the device (the array is not sized, so we can pass an array with any number of elements). String management in Solidity is really quite peculiar (in the sense that practically everything must be done by hand), and in this case we resorted to a particular function of the 'abi' object called encodePacked, to which we can pass two strings; it concatenates them, treating each string as a plain sequence of bytes.
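For string arguments, abi.encodePacked boils down to laying the raw bytes of each value one after the other, with no length prefix and no padding. A rough Python analogue (a sketch for intuition, not the Solidity implementation):

```python
def encode_packed_strings(*parts):
    """Rough analogue of Solidity's abi.encodePacked for string arguments:
    each string contributes its raw UTF-8 bytes, concatenated directly,
    with no length prefix and no 32-byte padding."""
    return b"".join(p.encode("utf-8") for p in parts)

# Concatenating two readings with the '*' separator used in the post
packed = encode_packed_strings("23.5", "*", "89.2")
```

This is also why encodePacked output is ambiguous for hashing unless a separator (like the asterisk here) keeps adjacent values distinguishable.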

All data is stored off-chain and it will be possible to verify its correctness with the following procedural scheme:

1) The application reads the record with the data to be verified from the off-chain database

2) The hash of that data is recomputed, and the corresponding blockchain address is read from the record

3) The Blockchain is queried to obtain the hash stored at the address. Recall that this hash was calculated based on the data received at the time the device sent it.

4) If the hashes match, the off-chain data is authentic and unchanged since the device transmitted it.
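The four steps above can be sketched as follows. Two stated assumptions: the on-chain lookup is mocked by a plain dictionary (in the real system it would be a web3 call to the IoTHash contract), and SHA-256 again stands in for the unnamed hash algorithm:

```python
import hashlib

def compute_hash(raw):
    # Same hashing assumption as in the storage step: the post does not
    # name the algorithm, so SHA-256 stands in for it here.
    return hashlib.sha256(raw.encode("utf-8")).hexdigest()

# Mock of the on-chain storage: address -> stored hash.
onchain = {}

def store(address, raw_data):
    onchain[address] = compute_hash(raw_data)

def verify_offchain_record(address, offchain_data):
    """Steps 1-4 above: recompute the hash of the off-chain record and
    compare it with the hash stored at the given blockchain address."""
    return onchain.get(address) == compute_hash(offchain_data)

store("0xabc", "Link Smart 7688*23.5*89.2")
verify_offchain_record("0xabc", "Link Smart 7688*23.5*89.2")   # authentic
verify_offchain_record("0xabc", "Link Smart 7688*99.9*89.2")   # tampered
```

Any edit to the off-chain record changes its hash, so the comparison against the immutable on-chain value immediately exposes the tampering.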


We have seen how, by using arrays of strings, it is possible to manage any type of data coming from a device, and thus always reuse the same smart contract. The only caveat is the use of the experimental pragma directive needed to pass arrays as parameters.


The Web App IoT Client (part three of three)

And we have finally arrived at the last part of our Blockchain IoT proof of concept (POC), where we explain how to read, from a web page, the data that the API server (described in the second post) stores on the blockchain. The scenario is the following: the device (see the first part) transmits data about temperature, humidity and GPS position to a REST API server, which in turn registers it in the blockchain, creating a smart contract called IotData for each reading received. The blockchain hosts another smart contract, called IotDataManager, that collects all these instances and makes them available by exposing some methods.

Web App Architecture

The web app is very simple. It is a website written in C# with ASP.NET Core technology, practically made up of a single controller with two actions: the first serves the page for reading the temperatures from the blockchain, the second plots the device's path from the GPS points stored while the values were being recorded. In both cases no server technology is used to read the data; instead we use Web3.js, a JavaScript library that simplifies RPC communication with the Ethereum endpoint through an ad hoc provider.

Let's start

First of all we have to install the Web3.js library. You can do it from the command line using npm (install it if you don't have it on your PC):


npm install -g web3

We used an interesting chart.js plugin that can process data in real time, taking it from a stream. The plugin can be downloaded from its site and is very easy to use. At this point, all that remains is to dive into the code and analyze its main parts. Insert the links to the JS libraries used in the code:

<script src="~/lib/moment/moment.min.js"></script>
<script src="~/lib/charts/chart.js@2.8.0.js"></script>
<script src="~/lib/charts/chartjs-plugin-streaming@1.8.0.js"></script>

When we want to use the web3 library we have to choose a provider. Since we are using Ganache, which simulates a real Ethereum blockchain, we connect through an HTTP provider. Use the following line of code to instantiate your provider:

web3 = new Web3(new Web3.providers.HttpProvider("http://localhost:7545"));

You should always check the version of the library to avoid syntax errors. To check the current library version, use the following line of code (you can run it from your browser's console):

var version = web3.version.api;

For each smart contract we want to use, we need to get its JSON interface, called the 'ABI'. This allows us to obtain an instance of that particular smart contract from the blockchain. In addition, to request a particular instance we must also provide its address, assigned by Ethereum at creation time. We get this address when we perform the smart contract migration on Ganache via the 'truffle migrate' command.

var abi = JSON.parse('[{"constant": true,"inputs": [{"name": "","type": "uint256"}],"name": "iotdata",...   
var address = "0x7e03d3fEd772AFA633B3Fe0889Be03bA4F5d591E";

Here is the result of the migration of the IotDataManager smart contract, where you can get the deployment address.

We get the instance (the only one) of the IotDataManager smart contract

 var iotDataManagerContract = web3.eth.contract(abi);
 var iotDataManagerContractInstance =;

Now we are ready to invoke any method, both those that read the state of our blockchain and those that carry out transactions with a corresponding use of gas. First we check how much data has arrived and been stored in the blockchain. Recall that every reading coming from the device and stored in the blockchain causes the creation of an IotData smart contract, which is in turn stored in the IotDataManager, which simulates the functionality of a database. Let's invoke the 'getNumberOfIotData()' method; we get this kind of response (as you can see from the Chrome debugger):


In the Solidity code of the IotDataManager smart contract there is an array containing all the addresses of the IotData smart contracts, and an associative array that returns a structure with the recorded data corresponding to an address.

Let's take the last recorded data with its temperature and update the dashboard:

 var lastDataAddress = iotDataManagerContractInstance.getIotDataAddress(tot - 1);
 var lastResp = iotDataManagerContractInstance.getIotData(lastDataAddress);
 var lastTempValue = parseFloat(lastResp[1]);

 Here is the code of updateDashboard(lastResp)

Now let's analyze the last part of our javascript code

The onRefresh() function is part of the configuration of the displayed graph and indicates which data must be published on the Cartesian axes of the graph itself. The current time is reported on the X axis and the result of the call to the getIotdata() function is reported on the Y axis. This function checks whether new values have arrived: if so, it takes the most recent one (remember that the device sends data once per second), otherwise it keeps the last value. The final result is the graph shown below, where out-of-range values ( > 25 degrees) are registered in the table on the right.

With the same kind of logic, we query the blockchain to capture, for example, the last 10 positions, indicating exactly which GPS points the device has actually covered.

The result, visible on a Google Map, is the path of the device traced by the markers inserted on the map at each recording.


In this series of posts we have covered the main aspects of an IoT Blockchain proof of concept. We have deliberately omitted all the authentication and security aspects necessary on these occasions, which can be implemented later as an improvement to the POC.





REST Server Interface between IoT device and Ethereum Blockchain - Part two of three

We continue with our example. After configuring our IoT device (see my previous post) so that it can transmit data to the REST server, let's see how to configure the API service that will forward the data to the blockchain after receiving it from the device. In the picture below you can see the architecture of this proof of concept:

The API server is an ASP.NET Core server consisting essentially of a single API controller. The controller receives the data from the device through a POST call (see the Python client deployed on the LinkIt device), receiving a very simple JSON that is mapped to a POCO class structure of this type:

The controller method that implements the POST does nothing more than prepare the message to transmit the transaction data to the blockchain and wait for the outcome of the transaction on the blockchain itself. In the next section we'll see what kind of response the server receives. Here is the server controller code:

        [HttpPost]
        [Route("api/temphum", Name = "TemperatureAndHumidityFromDevice")]
        public async Task<IActionResult> TemperatureAndHumidityFromDevice(IOTData iotData)
            var web3 = new Web3(ethereumAddress);

            // gas limit for deploying the IotData contract
            var gasForDeployContract = new HexBigInteger(6000000);
            var deploymentMessage = new IotDataDeployment()
                FromAddress = BC.GetDefaultAccount(),
                Gas = gasForDeployContract,
                DeviceID = iotData.DeviceID,
                Temperature = iotData.Temperature,
                Humidity = iotData.Humidity,
                Lat = iotData.Lat,
                Lng = iotData.Lng,
                IotDataManagerContract = BC.GetIotDataManager_Address()

            var deploymentHandler = web3.Eth.GetContractDeploymentHandler<IotDataDeployment>();
            var transactionReceipt = await deploymentHandler.SendRequestAndWaitForReceiptAsync(deploymentMessage);
            var newIotAddress = transactionReceipt.ContractAddress;

            // the response body is the address of the newly created smart contract
            String result = newIotAddress;
            return Ok(result);



How are smart contracts structured on the blockchain? The figure below describes one of the two smart contracts deployed on the blockchain, called IotDataManager.

This smart contract works as a database of device data. Whenever the API server makes a call, it actually invokes the constructor of the smart contract called IotData, whose code is reported in full below:

pragma solidity ^0.5.0;   // compiler version assumed from the syntax used below

import "./IotDataManager.sol";

contract IotData {
     string public deviceid;
     string public temperature;
     string public humidity;
     string public lat;
     string public lng;

     constructor (
         string memory _deviceid, string memory _temperature, string memory _humidity, string memory _lat, string memory _lng, address _IOTDATAMANAGER_CONTRACT) public {
             deviceid = _deviceid;
             temperature = _temperature;
             humidity = _humidity;
             lat = _lat;
             lng = _lng;
             IotDataManager manager = IotDataManager(_IOTDATAMANAGER_CONTRACT);
             manager.storeIotDataReference(address(this), _deviceid, _temperature, _humidity, _lat, _lng);

     function ()  external {
         // If anyone wants to send Ether to this contract, the transaction gets rejected
         revert("no eth accepted");
The constructor receives as input the device data and the address of the IotDataManager smart contract. Inside the constructor we already know the address that the Ethereum Virtual Machine has assigned to us, and this allows us to invoke the IotDataManager method that stores the data keyed by the newly created smart contract address. As you can see from the figure above, the IotDataManager smart contract will then offer the DApp the possibility of invoking the other method to retrieve the data at the respective address, but we will see this in the next post, where we will describe the web app that uses web3.js.


An important thing to take into account is that after creating the API server and verifying that everything works under IIS Express, everything must be published on a full IIS server, otherwise our device will not be able to communicate. IIS Express is in fact just a process bound to the loopback interface, which is fine for processes inside our machine but is not visible to any client external to it.


Once our API server is published, we can run the Python program on the device and watch the blocks related to each call being chained on the blockchain, while on the device console we see the addresses of the IotData smart contracts created for each call.


See you in the next post, where we will see how to build a web app that, using web3.js, displays real-time data coming from the device.

Configuring LinkIt Smart 7688 as a REST Client (to use it as a Blockchain IoT client)

We took a LinkIt Smart 7688 device and configured it as a RESTful client via a simple application written in Python. Later we will use this device as an IoT device that transmits data to an Ethereum blockchain.

LinkIt Smart 7688

The chosen device ships with a Linux distribution called OpenWrt and is ideal for running quite interesting tests for IoT applications. The system already comes with many features useful for developing dedicated applications. Many modules are available to those who develop complex applications, besides the possibility of plugging in sensors of all kinds. On the board we already find packages such as Python, Node.js, git, curl and so on, and you can easily install any other packages that may be needed.

Let's start using it

First, the device must be powered, so take your mini-USB charger and connect it to the port labeled PWR (in very small print). The device turns on immediately and boots. You can see the WiFi LED turn on; as soon as it turns green, the device is providing a WiFi network to which you can connect from your PC (the device is in AP mode). Look for it among the available networks; the default name is LinkIt_Smart_7688_xxxxx, as you can see in the picture below.

At this point, connect to the device's network and get an IP address with which you can communicate later. The subnet used by the LinkIt is 192.168.100.x

Open a browser (NOT Edge, because the script for administrative management crashes there; we used Chrome) and connect to the device's address, which will open the login page for the administration site. The first time you connect, you will be asked to set a password that will persist on the device until a possible hard reset.

Once inside, we move to the network tab, where it is possible to switch from AP mode to Station mode, in which the device can be connected to the local WiFi network of our choice by entering the appropriate network password. At this point the device can also access the internet, and we too can switch back to the local WiFi and share the same network as our device. After restarting, the LinkIt is ready.


To detect the IP assigned to our device, use a network scanner to discover all the assigned IP addresses. We used IPScan, a free tool that you can download from the internet. Our results are shown below:

Connect to LinkIt

To connect to the device we will use an SSH connection with the classic SSH client called PuTTY, downloadable from the network as open source software.

Once the PuTTY console is open, we insert the IP address of our device and get a root console (log in as 'root' with the password you chose before) from which we can start making the device operational. From the command prompt, download the Python 'requests' module using the language's package manager, called 'pip', issuing the command shown in the figure.

Write the Client

At this point, all we have to do is start writing some code. We should not expect to have Visual Studio installed, so we will have to make do with a historic editor well known to long-time Linux users, called vi, obviously already present on the device as part of the OpenWrt system. Type vi at the command line to open the editor. We enter editing mode (one of the two modes of vi: 'editing' and 'command') and insert the following lines of code:

Press 'Esc', then type ':wq' to save and quit the editor. Here is the complete code:

import requests
import time
import random
import json

count = 0

lats = ["44.534041", "44.535264", "44.536947", "44.539325","44.540732","44.542139","44.543852","44.545542","44.546689","44.548188"]
lngs = ["11.124427", "11.121359", "11.117056","11.110705","11.107121","11.103366","11.098850","11.094579","11.091575","11.087616"]
while (count < 10):
    rnd = random.uniform(11, 30)
    truncatedrnd = round(rnd, 2)
    iotData = {'DeviceID':'Link Smart 7688', 'Temperature': str(truncatedrnd), 'Humidity': '89.2', 'Lat':lats[count], 'Lng':lngs[count]}

    h = {'Content-Type': 'application/json'}
    rp ="http://localhost/iotapi/api/temphum", data=json.dumps(iotData), headers=h)
    print (rp.text)   # address of the smart contract created for this reading
    time.sleep(1)     # wait one second between readings
    count = count + 1
print ('end loop')

The code is very simple. We define two arrays of strings containing the latitude and longitude of ten GPS positions. We start a loop (of only ten iterations) in which a random float between 11 and 30 is chosen, its decimal digits are truncated after the second, and the payload of the REST call is built. After declaring a header containing the Content-Type, the HTTP request is executed, indicating the URL, the payload and the header just built. We wait synchronously for the answer, which is then printed on screen. At each iteration, we wait for one second. If you use the HTTPS protocol, add the verify=False parameter to the call to avoid verification problems with a non-secure certificate (such as a self-signed one).

Launch the client from the command prompt with python. The server listening for this call is an API server that inserts this data into a blockchain, creating a smart contract instance for each call. The result of the call is the address of the newly created smart contract.

Our REST client has been programmed to simulate sending temperature and GPS position, and could be used in many areas where a cold chain must be certified with absolute certainty (transport of medicines, transport of frozen food, ...). The use of this device, however, can easily be extended to home automation (think of image detection in a security system) or to new applications for the automotive sector (think, for example, of recording a car's routes for the insurance industry). You simply need to plug in the sensor that provides the necessary physical data.


In the following posts we will see the part related to the API server that receives the data from our device and inserts it into the blockchain, while in the final post we will see a web application that reads the data from the blockchain in real time, as soon as it is inserted by the API server.

Using Nethereum.Web3 to implement a Supply Chain Tracking System

Source Code

All code used in this post (C#, Solidity, DB scripts) can be downloaded at my Git repository


I had the opportunity to test the Nethereum.Web3 library by implementing a Supply Chain Tracking System (SCTS) project, and it proved extremely interesting from several points of view. In this post we will see how you can interact from .NET code with an Ethereum blockchain, making calls to smart contract methods.


What is a Supply Chain Tracking System (SCTS)

There is an official implementation of such a system that you can download at this address. The definition that is given is this:

"SCTS is a smart contract to keep track of products from the raw materials to the shopping stage. This system is powered by the Ethereum Blockchain, so it's fully distributed, immutable and auditable." The version implemented here, however, differs considerably in the process flow architecture and in the relationships between entities. In practice we have this type of flow:

A Handler (any user who manipulates the product at a certain stage of the transport chain) at some point takes charge of a product and applies an action to it (for example "I take the product from Handler2", or "I deposited the product in Warehouse X"). A Handler is always associated with a user who must log in, and to whom there always corresponds an Ethereum address, i.e. an Ethereum account. Every action applied to that product by a certain Handler is stored in the blockchain and can subsequently be retrieved, reconstructing the history of the product itself. Actions can be applied until the product reaches its destination, at which point it is considered "consumed" and no further action can be associated with it. Here is a process flow diagram:

Web Application

The application consists of a .NET Core 2.0 web portal accessed through IdentityServer. The database was built in such a way that each account is associated with a blockchain address (an Ethereum account), so a column was added to the AspNetUsers table of Identity Server.

To be able to return the BC address at authentication time, IdentityServer inserts this information into a claim that is added to the predefined claims collection Identity returns in the token of the user who has just accessed the portal. To keep things simple, we chose to extend the ApplicationUser class with a property that will contain the BC address:

And now we come to the MVC application. First we need to add the Nethereum.Web3 package via NuGet.

Now we have all the ingredients to interact with our blockchain; we will see later the smart contracts involved on the Ethereum side. Once logged into the portal, we can browse to the product page, where the list of products already entered is displayed. This page begins with a call to the BC to query a "database" smart contract that contains all the products. As we can see, the page shows the BC address at the top, as evidence of its association with the username in the upper right corner.

The page must execute some calls to the Ethereum BC in order to retrieve the list of products, and here the use of the Nethereum library begins. Let's look at the code in detail. First, the claim containing the user's Ethereum address is retrieved from IdentityServer. Then we read from the configuration file the address of the Ethereum network, which we pass to the Web3 class constructor to create the instance representing the blockchain context we will operate on.
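The setup just described can be sketched like this (the claim type and configuration key are assumptions; Ganache listens on http://localhost:7545 by default in its GUI):

```csharp
using System.Linq;
using Nethereum.Web3;

// Hypothetical claim type carrying the user's Ethereum address
string bcAddress = User.Claims
    .FirstOrDefault(c => c.Type == "bc_address")?.Value;

// Hypothetical configuration key holding the Ethereum network URL
string networkUrl = Configuration["Ethereum:NetworkUrl"];

// The Web3 instance is the entry point to the blockchain context
var web3 = new Web3(networkUrl);
```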

With the Web3 instance we can now access the smart contracts published in the BC and invoke their methods. To invoke a method of a smart contract we need the ABI (Application Binary Interface) and the address of the contract itself. What is an ABI for? Remember that a smart contract is deployed to Ethereum as bytecode. When we want to invoke a method, we must follow an extremely rigorous formalism that forces the caller to encode the parameters of the call in a way that may be unfamiliar to those who program only in high-level languages. Following the Ethereum documentation, calling a method with two integer input parameters requires passing 68 bytes to the BC. The first 4 bytes are the first 4 bytes of the hash of the method signature. For example, if the method has the following signature

function baz(uint32 x, uint32 y) public pure returns (bool r)

then the first 4 bytes of the operation are taken

bytes4(keccak256("baz(uint32,uint32)")) = 0xcdcd77c0

followed by 32 bytes for the first parameter (zero-padded) and another 32 bytes for the second parameter (zero-padded). This is evidently much more complex than parameter passing in high-level languages, where everything is hidden by the compilers, which nonetheless adopt ABI-like mechanisms that are invisible at the source level but absolutely present at the machine-language level. As a further example, think of function calls at the assembler level: there the stack is the portion of memory that holds the ABI for the call made by the caller.
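To make the layout concrete, here is an illustrative sketch that assembles the 68 bytes for a `baz(uint32,uint32)` call using the selector quoted above (the parameter values are arbitrary):

```csharp
using System;

byte[] callData = new byte[68];

// First 4 bytes: the function selector bytes4(keccak256("baz(uint32,uint32)"))
new byte[] { 0xcd, 0xcd, 0x77, 0xc0 }.CopyTo(callData, 0);

// Each parameter occupies a 32-byte big-endian word, left-padded with zeros
static void WriteUint32(byte[] buffer, int offset, uint value)
{
    byte[] word = BitConverter.GetBytes(value);
    if (BitConverter.IsLittleEndian) Array.Reverse(word);
    word.CopyTo(buffer, offset + 32 - word.Length);
}

WriteUint32(callData, 4, 69);   // first parameter (x)
WriteUint32(callData, 36, 1);   // second parameter (y)
// callData is now the 68-byte payload sent to the node for baz(69, 1)
```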

Where do we find the ABI of a contract? The Solidity compiler produces it when compiling a smart contract; the result is a JSON file that also contains, among other things, the bytecode of the contract itself:

Above is shown the portion of the ABI, in JSON format, for the 'getProduct' method, which takes an integer as input and returns the address of the corresponding contract.

For convenience, a 'BlockChain' class has been built containing all the ABIs and all the contract addresses. The contract address is very easy to retrieve during the contract migration phase. We used Ganache as our Ethereum blockchain, with the default addresses it provides.

In our case the smart contract Database has address 0xde554c0b4ca9efcf1958c01485d673a0b06d5bc1

Let's get ready for the various calls

We retrieve the Contract object that the library nicely abstracts for us, then the Function object that abstracts the method present in the smart contract, and finally we make the asynchronous call to the method. This call returns the number of Product smart contracts actually recorded in the BC. For each of these we invoke another function of the Database smart contract, passing it the loop index and getting back the address of the Product smart contract. Once the address of the Product has been retrieved, we read the product properties through appropriate calls, as shown:
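The read loop can be sketched as follows; `getProduct` comes from the ABI shown earlier, while the counter function name, the property getter and the `BlockChain` helper members are assumptions:

```csharp
var database = web3.Eth.GetContract(BlockChain.DatabaseAbi, BlockChain.DatabaseAddress);

// Hypothetical counter function returning how many Products are recorded
int count = await database.GetFunction("getProductsCount").CallAsync<int>();

var getProduct = database.GetFunction("getProduct");
for (int i = 0; i < count; i++)
{
    // getProduct(i) returns the address of the i-th Product contract
    string productAddress = await getProduct.CallAsync<string>(i);

    var product = web3.Eth.GetContract(BlockChain.ProductAbi, productAddress);
    string name = await product.GetFunction("name").CallAsync<string>(); // hypothetical getter
    // ...read the remaining properties and build the DTO for the page model
}
```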

With the properties of each Product we create the corresponding DTO to be included in the model displayed on the page. Now let's insert a new product; that is, imagine that a new product is created within the company, or a new batch of materials that will then be sent to the end customer:

For more precision we also store the location where the product was created. Once created, we see it in the list:

How does the creation of a new Product smart contract work in the blockchain? In this case the operation obviously has a cost (in gas), unlike the read operation, which does not change the state of the blockchain and therefore requires no gas. To prepare the call we build an object that contains, among other things, the gas to 'pay' for the transaction, in addition to the other parameters: the product name, additional information (the product description), the geographical coordinates, and the address of the 'Database' smart contract. Here is the code in detail:
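A hedged sketch of the deployment call with Nethereum (the gas amount, the variable names and the `BlockChain` helper members are assumptions):

```csharp
using Nethereum.Hex.HexTypes;

var gas = new HexBigInteger(2_000_000);

// Deploys a new Product contract; the sender's account pays the gas
string txHash = await web3.Eth.DeployContract.SendRequestAsync(
    BlockChain.ProductAbi,       // ABI produced by the Solidity compiler
    BlockChain.ProductBytecode,  // bytecode from the same JSON artifact
    bcAddress,                   // the logged-in user's Ethereum address
    gas,
    productName, description, latitude, longitude,
    BlockChain.DatabaseAddress); // the 'Database' contract address
```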

Now we add an action to the product. For example, let's say that the product has been stored in the main warehouse.

From the Nethereum point of view the technique is the same as the previous transaction that created the Product contract: in practice, we invoke a method to retrieve the address of the Product contract to which to add the action, set the parameters to invoke 'addAction' on the contract, and then view the product history. If at this point the product were delivered to another user, an action would be added and, by accessing another account, the whole chain of steps would be displayed, representing the various actions performed on the product up to the end user. At that point the product is said to be 'consumed' and no further action is permitted; the end user could use an app to verify the whole chain of steps, seeing that the product followed a certain path from leaving the factory until it reached him.
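In code, the addAction step might look like this (only 'addAction' is taken from the post; the other names and the gas amount are assumptions):

```csharp
using Nethereum.Hex.HexTypes;

var product = web3.Eth.GetContract(BlockChain.ProductAbi, productAddress);
var addAction = product.GetFunction("addAction");

// State-changing call: it is a transaction, so it costs gas
string txHash = await addAction.SendTransactionAsync(
    bcAddress,                   // sender (the current Handler)
    new HexBigInteger(900_000),  // gas
    new HexBigInteger(0),        // no ether transferred
    "Stored in the main warehouse");
```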

Smart Contract

We come now to the Solidity part. There is a base contract called Owned, from which the Database contract is derived, for example, which allows operations on the latter only if they are performed from the initially configured (owner) address.

The Database contract also contains the following functions to manage the contracts within it:

Here instead is the code of the constructor of the Product contract, which takes the address of the Database contract as a parameter; it will be very interesting to see this during migration.

Here is how a migration file must be written when it contains a contract (Product) that, in order to be initialized, must wait for the creation of the Database contract and receive its address:



The SCTS system implemented through the Ethereum blockchain guarantees the total integrity and immutability of the various steps that characterize the life of a product. The more uniquely the product can be identified, the more this chain guarantees maximum reliability, offering assurances about the integrity of the product as well as its origin (think, for example, of the great use that could be made of it to guarantee the authenticity of some Made in Italy products). As usual, the code can be retrieved from the Git project, which contains, in addition to the C# code, the script to generate the DB and the Solidity smart contract code to be built in Visual Studio Code. The users already entered are the following:

All use the same password Pa$$w0rd

Happy code!


How IQC (Italy) implemented an Openbadge Blockchain system


IQC is an Italian company based in Bologna. 

IQC is a leading provider of consulting services focused on supporting companies in improving processes, products and skills in order to achieve their best performance. IQC is able to do this thanks to innovative tools like the Digital Badge and Performance Digital Traceability. The value of IQC is based on the long experience of its founding partners, who have supported several organizations in enhancing their knowledge, fostering Italian know-how in domestic and foreign markets.

In the era of Industry 4.0 and human-capital big data, IQC proposes the Digital IQC Badge, an innovative tool providing a digital representation of the performance of organizations, processes/services and products, as well as a wide-ranging view of the skills and competences of workers.

As you can read on the official site, "Open Badges" are verifiable, portable digital badges with embedded metadata about skills and achievements. They comply with the Open Badges Specification and are shareable across the web. Each badge carries information about the badge itself, its recipient, the issuer, and any supporting evidence, and all of this information can be viewed in the form of a badge. Organizations of every kind issue badges in accordance with the Specification, from non-profits to major employers.

Badges issued by IQC (or by companies in agreement with IQC) are earned as a result of acquiring skills of any kind and can then be used by their owners to demonstrate what has been acquired. To give added value to the information contained in the badge, it was decided to insert the badge into a blockchain, sealing its contents and making the badge no longer modifiable.


We chose to use the Azure Blockchain Workbench infrastructure made available by Microsoft as a cloud service. The scheme is this:

  1. Through the CBOX portal, you authenticate yourself as an issuer and issue badges after someone obtains certain skills
  2. The badge is managed and stored by the internal structures of the CBOX application
  3. At the same time a request is made to a gateway (Web App) for the insertion of the badge inside the blockchain
  4. The Badge is inserted in the Blockchain Workbench Azure
  5. Through the same gateway you can check the information entered in the Blockchain simply by providing the Badge ID

Workbench Workflow

As we already explored in my previous post, the Azure Blockchain behaves like a finite-state machine: each contract follows a workflow that takes it from one state to another. This flow represents the life cycle of the contract from when it is created until it reaches a success state or possibly an error state. In our case the workflow relates to a single contract called BadgeV2, which has a diagram of this type:

We chose to add an additional transaction in the Issued state to allow an issuer to invalidate (for any reason) a badge once issued. This operation can only be performed by the issuer who issued the badge.

Check a Badge

To check the presence of your badge, simply enter a URL that points to the gateway application with a query string containing the ID of the badge to be checked, or click the link in the CBOX portal where I can see all the badges issued in my name, as shown:


or connect to the search page and manually enter the ID in the textbox as shown in the figure:

Technically, we took advantage of a possibility offered by Azure Blockchain Workbench: access to an off-chain DB that is kept in sync with the information inside the Azure Ethereum nodes. This feature allows us to perform queries using Entity Framework without having to use the REST API described in the previous post. During the creation of the deployment, the SQL off-chain synchronization DB is created with a series of predefined tables and views. However, with Entity Framework Core (which we used in the Web App gateway) it is not possible to access the views (at least for now, without workarounds), so we had to rewrite all the lambda expressions to access the properties of the contract corresponding to the issued badge.
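An illustrative Entity Framework Core query against the off-chain tables; the entity and property names here are assumptions modelled on the Workbench off-chain schema, not the actual model:

```csharp
using Microsoft.EntityFrameworkCore;
using System.Linq;

// Look up the contract instance whose "BadgeId" property matches the requested ID,
// joining the contract-property rows back to their parent contract row.
var contract = await _db.ContractProperties
    .Where(p => p.PropertyName == "BadgeId" && p.Value == badgeId)
    .Join(_db.Contracts,
          p => p.ContractId,
          c => c.Id,
          (p, c) => c)
    .FirstOrDefaultAsync();
```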

The result of the query is the summary found in the blockchain, which looks like this:






Using REST API in Azure Workbench Blockchain


You can find all the code of the Web App WebClientWorkbench with calls to the Azure Workbench Blockchain Rest APIs here (Git Repository)
You can find the HelloBlockchain sample code here


We started using Microsoft Blockchain Workbench to implement an OpenBadge management application on the blockchain. Following the official Microsoft tutorial, we obtained the Blockchain Workbench version recently made available by the Redmond house. The prerequisites for following this post are a Blockchain Workbench deployment on Azure and the creation of the HelloBlockchain test application that you can find in the examples available on Git. In this post, instead of using the administration app provided by default during the deployment, we will use a Web App (.NET Core) which, through calls to the REST API services, will interact with the HelloBlockchain application. The scheme is as follows:


Azure Active Directory App registration

Our Web App will use Azure Active Directory for authentication, thus exploiting the integrated OAuth2 authorization mechanism. This is done by registering an app in Azure Active Directory with the features we will see shortly. Once the user has signed in to our Web App using the credentials of a user in Azure Active Directory, he has obtained an authentication token (auth code). In order to invoke the Blockchain APIs, however, the Web App must use that authentication token, together with its own credentials, to obtain an access token for the Blockchain API resource. Once you have the access token you can finally use it (by inserting it in the request header) to invoke the Blockchain REST API. The scheme is:

Let's start by creating a new ASP.NET Core MVC project in Visual Studio and note its default URL (e.g. http://localhost:51369). Now we register an app in our Azure portal: from the Active Directory menu, go to the "App registrations" blade and register our app with these features:

It will be essential to give our newly registered app two key features. The first is a key called ClientSecret, which it will use to prove its identity (i.e. its credentials) when requesting an access token. The second is the authorization to use the Blockchain API: this way, when an access token is requested, it will be created with the authorization to use the Blockchain APIs embedded in it. For the key, go to the Properties of the app just registered, enter the Keys blade and generate the key, taking care to store it immediately after it is created, otherwise we will not be able to recover it.

At this point we enter the "Required Permissions" blade and proceed to select the API to be authorized. In the API search text field, enter the name of the API created during the Azure Blockchain Workbench deployment. We kept the default name "Blockchain API", as we can see from the list of registered apps:

We are offered the choice of letting the app use the blockchain as a Global Administrator or just access the APIs. We choose the second option:

Finally, to complete the operation, don't forget to click the "Grant Permissions" button, otherwise we will only have set the permissions without activating them.

We note the Application ID and Client Secret. We also need to note the ID of the Blockchain API app and its base URL, found in the app's properties. Other information to note is the tenant domain name we are using and the Tenant ID (the unique ID of our Azure Active Directory). With this information we return to our .NET Core Web Application and set the appsettings.json file with these features:
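The settings section might look like the sketch below; the key names mirror the `AzureAdOptions.Settings` properties used later in the code, while the section name and placeholder values are assumptions:

```json
{
  "AzureAd": {
    "ClientId": "<Application ID of the registered Web App>",
    "ClientSecret": "<key generated in the Keys blade>",
    "Domain": "<tenant>.onmicrosoft.com",
    "TenantId": "<Tenant ID of the Azure Active Directory>",
    "WorkbenchResourceId": "<Application ID of the Blockchain API app>",
    "WorkbenchBaseAddress": "https://<workbench-api>.azurewebsites.net"
  }
}
```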

Web App Asp.NET Core

And we come to our web application. Let's try now to test if the flow we have described above really works.

We have a controller (AccountController) that, when we click on "Sign In", redirects us to the default Microsoft endpoint to log in to our Azure Active Directory. This allows us to use AAD users, who are automatically and transparently mapped to Ethereum addresses by Workbench.

We add a controller called BlockchainController with an action called Index that contains a first call to the Blockchain API, requesting the list of the application objects present in our deployment. Inside the method the code looks like this:


// Because we signed in already in the Web App, the userObjectID is known
string userObjectID = (User.FindFirst(""))?.Value;

// Using ADAL.NET, get a bearer token to access the Workbench API
AuthenticationContext authContext = new AuthenticationContext(AzureAdOptions.Settings.Authority, new NaiveSessionCache(userObjectID, HttpContext.Session));
ClientCredential credential = new ClientCredential(AzureAdOptions.Settings.ClientId, AzureAdOptions.Settings.ClientSecret);
AuthenticationResult result = await authContext.AcquireTokenSilentAsync(AzureAdOptions.Settings.WorkbenchResourceId, credential, new UserIdentifier(userObjectID, UserIdentifierType.UniqueId));

HttpClient client = new HttpClient();
HttpRequestMessage request = new HttpRequestMessage(HttpMethod.Get, AzureAdOptions.Settings.WorkbenchBaseAddress + "/api/v1/applications");
request.Headers.Authorization = new AuthenticationHeaderValue("Bearer", result.AccessToken);
HttpResponseMessage response = await client.SendAsync(request);

if (response.IsSuccessStatusCode)
{
    String json_string = await response.Content.ReadAsStringAsync();
    ApplicationReturnType applicationsResponse = JsonConvert.DeserializeObject<ApplicationReturnType>(json_string);

    List<IQC_WebClient_Workbench.Models.Application> applications = applicationsResponse.Applications;
    return View(applications);
}


Having already logged into the Web App via the Sign In link at the top right, I have already obtained an authentication code, and therefore the userObjectID is already known; I will use it later to create a unique identifier. I create an AuthenticationContext that uses a cache to store the access tokens already requested, so as not to ask for them on every request. In our case we built a class that inherits from TokenCache and uses the Session to store access tokens. I create the credentials using the ID of the app registered in Azure Active Directory and its associated client secret. At this point you just have to invoke AcquireTokenSilentAsync(String, ClientCredential, UserIdentifier) to get the access token.

Once obtained, we use it as a Bearer value in the Authorization header of the request to the Blockchain API. The request uses the URL "/api/v1/applications", appended to the base URL stored in a key in appsettings.json. If everything is OK, the Blockchain API will return the list of applications in the Workbench deployment.


How Workbench REST API works

Let's start with the application ID, which is important to us. In our case HelloBlockchain has ID 1; let's use it to get the application workflows. Workflows are the flows that describe exactly the state changes the various contracts of the application undergo as a result of calls to functions of the related smart contract. Our application is extremely simple and contains only one workflow with just one contract type, HelloBlockchain. Workbench has a very interesting approach because it considers the application as a finite-state machine where each contract follows a workflow, passing from one state to another. In our case the application workflow is this:

As we can see, once created, the contract goes into the Request state. In this state we can invoke a function of the smart contract (SendResponse) that brings the contract into the Respond state. At this point the contract can return to the Request state by invoking SendRequest again, and will remain there waiting for a subsequent response. Who can invoke the smart contract methods? Users belonging to the relevant roles (Requestor and Responder), obviously. Users are those of AAD, and the Workbench admin can assign the correct roles from the dashboard created by default at deployment time.

Corresponding to all this, in the Solidity smart contract we can see:

States become an enum in Solidity. In the SendRequest function, it is checked that the Ethereum address of the invoker is the same as the Requestor stored when the contract was created, as shown in the constructor's code:

(Note that the Workbench compiler does not yet support the keyword 'constructor', so you need to write a function with the same name as the smart contract, which is rightly reported as a warning by Visual Studio Code.) The Requestor and the Responder are the Ethereum addresses of those who make requests and those who answer. The passage between states is described in the HelloBlockchain.json file that must be fed to Workbench when the application is created. But let's focus on the API again, leaving the in-depth description of the Workbench elements to another post.

After recovering the usual Access Token (which in the meantime may have been renewed silently) here is the invocation to receive the workflow of our application:

HttpClient client = new HttpClient();
HttpRequestMessage request = new HttpRequestMessage(HttpMethod.Get, AzureAdOptions.Settings.WorkbenchBaseAddress + "/api/v1/applications/1/workflows");
request.Headers.Authorization = new AuthenticationHeaderValue("Bearer", result.AccessToken);
HttpResponseMessage response = await client.SendAsync(request);

if (response.IsSuccessStatusCode)
{
    String json_string = await response.Content.ReadAsStringAsync();
    WorkflowReturnType workflowResponse = JsonConvert.DeserializeObject<WorkflowReturnType>(json_string);

    List<IQC_WebClient_Workbench.Models.Workflow> workflows = workflowResponse.Workflows;
    return View(workflows);
}

Here is the result of the call:

Similarly, when the contracts related to a workflow are requested (in our case there is only one contract type, HelloBlockchain):

HttpClient client = new HttpClient();
HttpRequestMessage request = new HttpRequestMessage(HttpMethod.Get, AzureAdOptions.Settings.WorkbenchBaseAddress + "/api/v1/contracts?workflowId=1");
request.Headers.Authorization = new AuthenticationHeaderValue("Bearer", result.AccessToken);
HttpResponseMessage response = await client.SendAsync(request);

if (response.IsSuccessStatusCode)
{
    String json_string = await response.Content.ReadAsStringAsync();
    WorkflowInstancesReturnType workflowInstancesResponse = JsonConvert.DeserializeObject<WorkflowInstancesReturnType>(json_string);

    List<IQC_WebClient_Workbench.Models.Contract> contracts = workflowInstancesResponse.Contracts;
    return View(contracts);
}

Naturally, for simplicity, we have hardcoded some parameters within the URLs. The result of the contracts call is this:

Very interesting now is the creation of a new contract, which must be done (according to the official documentation) through a POST call to the relevant URL. After creating a simple form to collect the text of the request message (remember that the Solidity constructor wants the request message string), and making sure that the user we are logged in with belongs to the Requestor role, we can make the call to create a new contract. Here is the list of my users and their respective roles, taken from the dashboard:

Here is the request form:

At this point, the final code. The documentation makes clear that a JSON payload must be prepared in the WorkflowActionInput format, whose properties include, among other things, the RequestMessage. Once serialized and inserted in the payload of the request, the response obtained is the ID of the newly created contract (there is an error in the official documentation on this point at the time of writing).
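The payload construction can be sketched like this; the property names follow the WorkflowActionInput shape described in the Workbench REST API documentation, while the function ID and the parameter name used here are assumptions for HelloBlockchain:

```csharp
using Newtonsoft.Json.Linq;

// WorkflowActionInput: which constructor/function to run and with which parameters
var jsonObject = JObject.FromObject(new
{
    workflowFunctionID = 1,            // hypothetical ID of the constructor function
    workflowActionParameters = new[]
    {
        new { name = "message", value = requestMessage } // the RequestMessage text
    }
});
// jsonObject.ToString() is then used as the body of the POST request
```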

Be careful: the documentation reports that some parameters included in the query string are optional, but in reality, after receiving some error messages and investigating with Fiddler, it turns out that the workflowId parameter is actually mandatory.

After fixing the correct URL and receiving the new contract ID, we make an additional request to view the new contract on a new page:

HttpClient client = new HttpClient();
client.BaseAddress = new Uri(AzureAdOptions.Settings.WorkbenchBaseAddress);
HttpRequestMessage request = new HttpRequestMessage(HttpMethod.Post, "/api/v1/contracts?workflowId=1&contractCodeId=1&connectionId=1");
request.Headers.Authorization = new AuthenticationHeaderValue("Bearer", result.AccessToken);
request.Content = new StringContent(jsonObject.ToString(), System.Text.Encoding.UTF8, "application/json");
HttpResponseMessage response = await client.SendAsync(request);

if (response.IsSuccessStatusCode)
{
    String json_string = await response.Content.ReadAsStringAsync();
    int newContractID = JsonConvert.DeserializeObject<int>(json_string);

    HttpRequestMessage newRequest = new HttpRequestMessage(HttpMethod.Get, AzureAdOptions.Settings.WorkbenchBaseAddress + "/api/v1/contracts/" + newContractID);
    newRequest.Headers.Authorization = new AuthenticationHeaderValue("Bearer", result.AccessToken);
    HttpResponseMessage newResponse = await client.SendAsync(newRequest);

    if (newResponse.IsSuccessStatusCode)
    {
        String new_json_string = await newResponse.Content.ReadAsStringAsync();
        Contract newContract = JsonConvert.DeserializeObject<Contract>(new_json_string);

        return RedirectToAction("ContractDetail", newContract);
    }
}

Here is the result:


In this post we saw how to use the Azure Blockchain Workbench API. The WorkbenchClient found in the example code on Git was intentionally not used, because we wanted to show the direct calls to the APIs in detail. This technique certainly allows you to build custom applications once you have created your Solidity smart contracts. We noticed that, given the extreme simplicity of contract deployment and of user and role management (which makes transparent a series of Ethereum operations otherwise needed to manipulate account addresses), it is initially hard to break away from the structure the applications must follow: the workflow, finite-state model. For many applications, however, this approach is certainly the best one.

See at the top of this post the link for the shared code.

Happy chain!



IdentityServer4 Core and EntityFramework - Part 1


Here is a step-by-step guide to implementing a fairly common architecture containing an IdentityServer4 for all the aspects related to authentication and authorization, an MVC website built with ASP.NET Core, and a Web API application that provides the data to the site (or possibly also to other clients such as device apps). The scheme is illustrated below:

As we can see from the image, users access the website using the credentials stored in the Identity Data DB. When access to the application data is needed, the website acts as a client (registered in the Identity Data DB) that queries the REST service implemented by the Web API Core site. For the service call to succeed, the website must present an access token containing the authorization to access the service.

Let's begin with the creation of the IdentityServer4 project. We will obviously focus first on the Startup.cs class. In our case we started from a preset project that you will find in the GitHub project under the Quickstarts folder. Since we decided to use EntityFramework with AspNetIdentity to store the Identity data, we started from the Quickstart8 project, but the advice is to start from a new ASP.NET Core project with Authentication 'None' and add the required components piece by piece.

After creating the ASP.NET Core IdentityServer4 project, you need to add the packages for IdentityServer4, EntityFramework and AspNetIdentity. Here are the packages to install via NuGet:

Now we have to carry out the preliminary operations to create the database for IdentityServer. The approach is EntityFramework (Core) Code First: thanks to so-called migrations, starting from the DbContexts, models are produced from which the corresponding databases can subsequently be built. The DbContexts used are 3:

  • ApplicationDbContext (tables for managing ASP.NET Identity: users, roles, claims)
  • ConfigurationDbContext (tables for managing configuration data: clients, resources)
  • PersistedGrantDbContext (tables for persisting operational data: codes, tokens, consents)

The first DbContext is very simple, as we can see below:

public class ApplicationDbContext : IdentityDbContext<ApplicationUser>
{
    public ApplicationDbContext(DbContextOptions<ApplicationDbContext> options)
        : base(options)
    { }

    protected override void OnModelCreating(ModelBuilder builder)
    {
        base.OnModelCreating(builder);
        // Customize the ASP.NET Identity model and override the defaults if needed.
        // For example, you can rename the ASP.NET Identity table names and more.
        // Add your customizations after calling base.OnModelCreating(builder);
    }
}
It inherits from the IdentityDbContext class, which references an ApplicationUser defined as follows:

public class ApplicationUser : IdentityUser
{
}

This allows us to add any other custom properties to the IdentityUser class it inherits from. The other two contexts are defined inside the Startup class, which we will see in the next paragraph.


Let's now come to the Startup class, where all the various features needed for authentication and authorization are implemented. Here is the code:

public class Startup
{
    public Startup(IConfiguration configuration)
    {
        Configuration = configuration;
    }

    public IConfiguration Configuration { get; }

    public void ConfigureServices(IServiceCollection services)
    {
        services.AddMvc();

        const string connectionString = @"Data Source=.\MSSQLLocalDB;database=IdentityDB;trusted_connection=yes;";
        var migrationsAssembly = typeof(Startup).GetTypeInfo().Assembly.GetName().Name;

        // configure identity server store, keys, clients and scopes
        services.AddDbContext<ApplicationDbContext>(options =>
            options.UseSqlServer(connectionString));

        services.AddIdentity<ApplicationUser, IdentityRole>()
            .AddEntityFrameworkStores<ApplicationDbContext>()
            .AddDefaultTokenProviders();

        var builder = services.AddIdentityServer(options =>
            {
                options.Events.RaiseErrorEvents = true;
                options.Events.RaiseInformationEvents = true;
                options.Events.RaiseFailureEvents = true;
                options.Events.RaiseSuccessEvents = true;
            })
            .AddAspNetIdentity<ApplicationUser>()
            // this adds the config data from DB (clients, resources)
            .AddConfigurationStore(options =>
                options.ConfigureDbContext = b =>
                    b.UseSqlServer(connectionString,
                        sql => sql.MigrationsAssembly(migrationsAssembly)))
            // this adds the operational data from DB (codes, tokens, consents)
            .AddOperationalStore(options =>
            {
                options.ConfigureDbContext = b =>
                    b.UseSqlServer(connectionString,
                        sql => sql.MigrationsAssembly(migrationsAssembly));

                // this enables automatic token cleanup. this is optional.
                options.EnableTokenCleanup = true;
                options.TokenCleanupInterval = 30; // frequency in seconds to cleanup stale grants. 15 is useful during debugging
            });
    }
}
After adding the MVC middleware, we define the connection string that will be used to locate the database (which we have not yet created) that will contain the necessary tables. Then the specific context is registered as an Identity service, using the connection string defined earlier. Similarly, the default Identity system is added for the specific Users and Roles, and finally IdentityServer is configured to use the ASP.NET Identity implementations of IUserClaimsPrincipalFactory, IResourceOwnerPasswordValidator and IProfileService. As a last step, the EF implementation is configured for IClientStore, IResourceStore and ICorsPolicyService (for cross-domain calls to the API services).

Now we can create the models using migrations from the command line and, once the migrations have been created, create the database with the Update-Database command (still from the command line). The command line is the Package Manager Console, as illustrated below:
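The Package Manager Console commands look like the sketch below, one pair per DbContext; the migration names and output folders here are assumptions:

```powershell
Add-Migration InitialIdentityDbMigration -Context ApplicationDbContext -OutputDir Data/Migrations/Identity
Add-Migration InitialConfigurationDbMigration -Context ConfigurationDbContext -OutputDir Data/Migrations/IdentityServer/Configuration
Add-Migration InitialPersistedGrantDbMigration -Context PersistedGrantDbContext -OutputDir Data/Migrations/IdentityServer/PersistedGrant

Update-Database -Context ApplicationDbContext
Update-Database -Context ConfigurationDbContext
Update-Database -Context PersistedGrantDbContext
```

Since there are three DbContexts in the project, the -Context switch is required on every command to tell EF Core which model each migration belongs to.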

The result in the project is something very similar to this:

As we can see, all the migration names are prefixed with their creation timestamp. The database created has this format:

At this point, all that remains is to add a controller with an Index action and build a view that points to the description of the server endpoints.

By clicking on the discovery document we get the list:

You will find the final code in our Git repository.