Security Analysis of Ethereum Smart Contracts with Mythril

Mythril is an open-source security analysis tool for EVM bytecode from ConsenSys. It is also a component of their security analysis service, MythX. Mythril detects security vulnerabilities in smart contracts built for Ethereum and other EVM-compatible blockchains.

Vulnerabilities found by Mythril are reported with reference to the weaknesses listed in the Smart Contract Weakness Classification Registry (SWC Registry). I will use two entries from the SWC Registry for the examples in this article:

  • SWC-106 – Due to missing or insufficient access controls, malicious parties can self-destruct the contract.
  • SWC-107 – One of the major dangers of calling external contracts is that they can take over the control flow. In the reentrancy attack (a.k.a. recursive call attack), a malicious contract calls back into the calling contract before the first invocation of the function is finished.
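To make these two weaknesses concrete, here is a hypothetical pair of vulnerable contracts written for this article (they are not the contracts from the test repo, and the names are made up):

```solidity
// Hypothetical examples for illustration only.
pragma solidity ^0.6.2;

contract Destructible {
    // SWC-106: no access control, so any caller can destroy the
    // contract and claim its balance.
    function kill() public {
        selfdestruct(msg.sender);
    }
}

contract Vault {
    mapping(address => uint256) public balances;

    function deposit() public payable {
        balances[msg.sender] += msg.value;
    }

    // SWC-107: the external call happens before the balance update,
    // so a malicious fallback function can re-enter withdraw() and
    // drain funds before the first invocation finishes.
    function withdraw(uint256 amount) public {
        require(balances[msg.sender] >= amount, "insufficient balance");
        (bool ok, ) = msg.sender.call{value: amount}("");
        require(ok, "transfer failed");
        balances[msg.sender] -= amount;
    }
}
```

The fix for the second contract is the checks-effects-interactions pattern: update `balances[msg.sender]` before making the external call.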

Install Mythril on Windows

> docker pull mythril/myth

Get test files from GitHub

Source code for these tests is on GitHub: mythril-tests. Clone the repo locally and adjust the paths in the commands below to match your local environment.

Analyze a local smart contract

Analysis of SelfDestructMultiTxFeasible.sol

> docker run -v E:\share\:/data mythril/myth -v4 analyze /data/mythx-tests/05222022-25/SelfDestructMultiTxFeasible.sol

Mythril reports an instance of the SWC-106 vulnerability.

Analysis of SimpleDAO.sol

> docker run -v E:\share\:/data mythril/myth -v4 analyze /data/mythx-tests/05222022-25/SimpleDAO.sol

Mythril reports three instances of SWC-107 and one instance of SWC-105.

Analysis of a flattened contract file

A file containing both test contracts returns five vulnerability instances across the two contracts:

> docker run -v E:\share\:/data mythril/myth -v4 analyze /data/mythx-tests/05222022-25/flatenned-01.sol

Analyze a contract with an imported contract

Most smart contracts import other contracts to reuse functionality. You do not have to flatten the contracts into one file; Mythril can work with contracts that specify imports: SimpleDAOWithImport.sol

> docker run -v E:\share\:/data mythril/myth -v4 analyze /data/mythx-tests/05222022-26/SimpleDAOWithImport.sol

Analyze a contract with an @OpenZeppelin-style import

Mythril relies on solc to compile contract source code. For @OpenZeppelin-style imports, you have to specify a --solc-json file containing remappings that let solc locate the referenced files: SimpleDAOWith-OzImport.sol

> docker run -v E:\share\:/data mythril/myth -v4 analyze /data/mythx-tests/05222022-26/SimpleDAOWith-OzImport.sol --solc-json=/data/solc-args.json
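For reference, a minimal remapping file might look like the sketch below. This assumes the OpenZeppelin contracts were installed with npm under node_modules inside the mounted volume; the exact path on the right-hand side depends on your local layout.

```json
{
  "remappings": [
    "@openzeppelin/=/data/node_modules/@openzeppelin/"
  ]
}
```

With this in place, an import like `import "@openzeppelin/contracts/token/ERC20/ERC20.sol";` resolves against the mounted folder.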

Analyzing On-Chain Contracts

Mythril can analyze contracts deployed on the blockchain directly; you do not need the contract's source code. Support for Infura is built in, and you can also use a custom RPC endpoint. Replace INFURA_ID with your Infura project ID and CONTRACT_ADDRESS with the address of your contract on the blockchain:

> docker run mythril/myth -v4 analyze --rpc infura-rinkeby --infura-id INFURA_ID -a CONTRACT_ADDRESS

KeyNode with Node.js and Microsoft Azure

KeyNode is an application to issue and verify software license keys. The technology stack for KeyNode is Node.js, MongoDB and Microsoft Azure.

I had built this functionality with a cloud-based IDE (with a built-in source code repository and debugger), MongoHQ (MongoDB as a service) and AppFog (a cloud PaaS built on top of Cloud Foundry). It used SMTP/Gmail to email license files. That was the version I created a couple of years ago to issue tamper-proof signed XML license files for CodeDemo (a code snippet tool for developers, presenters and instructors).

For KeyNode (open source) I switched to a different toolset: Visual Studio Code and Windows Azure, simplified the code to remove the signed XML files, and open-sourced it on GitHub. Signed XML allowed offline verification in CodeDemo (a WPF desktop app); removing it requires verification to happen online. I am working on adding a web endpoint for verification of license keys. This version uses SendGrid to email license keys. KeyNode is deployed as a Windows Azure Web App, with Continuous Deployment from the source code repository on GitHub.

I created and tested this Node.js application locally without IIS and deployed it as an Azure Web App without making any changes to the code at all. Node.js applications are hosted in Azure under IIS with iisnode, a native IIS module that allows hosting of Node.js applications in IIS on Windows. The iisnode architecture also makes it significantly easier to take advantage of the scalability afforded by Azure.
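For context, the wiring between IIS and a Node.js app boils down to a small web.config that hands requests to iisnode. This is a minimal sketch; it assumes the entry point is named server.js (Azure can also generate an equivalent file automatically during deployment):

```xml
<configuration>
  <system.webServer>
    <!-- Route all requests for server.js through the iisnode module -->
    <handlers>
      <add name="iisnode" path="server.js" verb="*" modules="iisnode" />
    </handlers>
    <!-- Send every URL to the Node.js entry point -->
    <rewrite>
      <rules>
        <rule name="app">
          <match url="/*" />
          <action type="Rewrite" url="server.js" />
        </rule>
      </rules>
    </rewrite>
  </system.webServer>
</configuration>
```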

KeyNode is a work in progress. My plan is to use this as the basis for further explorations in the following areas:

  • DevOps, Docker and Microservices (at miniature scale of course!)
  • Create a Web UI with Express (a Node.js web application framework)
  • Integrate with Azure Storage/Queues
  • and more…

I invite you to check out the live site on Azure and fork it for your own experiments: KeyNode on GitHub.


Using Powershell with Splunk

The Splunk PowerShell Resource Kit is a convenient and very capable wrapper over the Splunk REST API. You can use the PowerShell cmdlets exposed by this resource kit to deploy, check and manage Splunk services, as well as execute Splunk searches. In this post, you will be introduced to the Splunk PowerShell Resource Kit and learn how to use the cmdlets to connect to a Splunk instance and execute searches.

1. First, you will need to download the resource kit from GitHub.
2. Installation is very simple: extract the files from the zip archive and double-click install.bat to install the Splunk PowerShell module.
3. Open a Windows PowerShell console from the Windows Start menu.
4. Verify that the Splunk module is installed by executing the Get-Module cmdlet.

Get-Module -ListAvailable Splunk

5. Import the resource kit cmdlets using the Import-Module command.

Import-Module -Name Splunk

6. Next, use the Get-Credential and Connect-Splunk cmdlets to connect to Splunk. You need to do this once per session, or whenever you switch to a different Splunk instance.

$credential = Get-Credential
Connect-Splunk -Credential $credential -ComputerName localhost

I have a local Splunk Enterprise instance running on my machine, so I am using localhost as the ComputerName. If you have a Splunk Cloud subscription, you can use your Splunk Cloud hostname as the ComputerName to connect to your subscription. Like so –

Connect-Splunk -Credential $credential -ComputerName "<your-splunk-cloud-host>"

7. Next, use the Search-Splunk cmdlet to execute searches –

Search-Splunk -Search "Error"

Here is a sample script:

$lastDay = (Get-Date).AddDays(-1).ToString('s')

$searches = @(
    "source=""*"" ERROR",
    "CreditDoesNotMatch",
    "source=""\\www3/access.log"" productId=WC-SH-G04"
)

Write-Output $lastDay
foreach ($search in $searches) {
    $qry = $search + " | stats count"
    Write-Output $qry
    Search-Splunk -Search $qry -StartTime $lastDay | Select-Object -ExpandProperty Count
}

The sample script executes multiple Splunk searches and outputs the count of results matching each query. Note that I am using the -StartTime parameter to scope the search to a narrower time window and the "stats count" command to get the count of results. You can get this sample script as a GitHub gist.

You can also use -EndTime and -MaxReturnCount to further constrain the query results, and the Format-List, Format-Table and Format-Wide commands to format the results. You can learn more about other search parameters, as well as other capabilities, in the resource kit documentation.
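Putting those parameters together, a search scoped to the last hour and capped at 100 results might look like this (a sketch based on the parameters described above; adjust the query to match your own indexed data):

```powershell
# Scope the search to the last hour, cap the result count,
# and render the results as a table
$start = (Get-Date).AddHours(-1).ToString('s')
$end   = (Get-Date).ToString('s')

Search-Splunk -Search "ERROR" -StartTime $start -EndTime $end -MaxReturnCount 100 |
    Format-Table
```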

Resources :

  1. Splunk PowerShell Resource Kit
  2. Splunk PowerShell Resource Kit on GitHub
  3. Splunk PowerShell Resource Kit documentation

Image credit : terminal by Andrea Mazzini from the Noun Project

The Site44 Workflow

A light weight development workflow with real-time website deployment.

I recently built a sample website to illustrate how clean, semantic HTML markup can be maintained when using Bootstrap's grid system. The solution is to use a CSS pre-processor to incorporate Bootstrap's LESS-based mixins into your own .less files and push the Bootstrap instructions down into your stylesheets. There are two ways to "compile" .less stylesheets: use a stand-alone LESS compiler, or use less.js. I found it very convenient to use less.js (note that it is not recommended for production deployment).

As I started developing the sample code, I found it a bit cumbersome to work with an entire web application project in Visual Studio, considering I was working with some really simple client-side HTML and CSS. Craving an alternative, I stumbled onto a development workflow that is incredibly simple and a lot of fun. I call it the Site44 workflow. Site44 turns your Dropbox folders into websites. And it is awesome! Here is what you do –
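To give a flavor of the mixin approach, here is a hypothetical .less file (the class names and import path are made up for illustration; it assumes Bootstrap 3's LESS source is available at the imported location):

```less
// Pull in Bootstrap's LESS source so its grid mixins are available
@import "bootstrap/less/bootstrap.less";

// Semantic class names instead of .row / .col-md-* in the markup
.article-list { .make-row(); }
.article      { .make-md-column(8); }
.sidebar      { .make-md-column(4); }
```

The HTML can then use `<div class="article-list">` and friends, keeping Bootstrap's grid vocabulary out of the markup entirely.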

1. Sign into Site44 using your Dropbox credentials.

2. Create a new site (all you have to do is come up with a name). I named mine "ash". A sub-folder with this name will show up in your Dropbox folder.

3. Drag this folder onto your GitHub for Windows screen and drop it there to create a GitHub repo in that folder and push it to GitHub.

4. Smile and write code.

As you save your code, the changes are deployed in real time to your website. You commit to your GitHub repo as you please. If you revert to a different version/branch of your code from your git repo, that version will be deployed (almost) instantly to your website. I wish there were a .site44ignore feature in Site44, just like .gitignore; it would allow me to keep my .git folder (and some other files) from getting published to the website. Other than that, this worked out really well for me.

I wrote about the experience of extending Bootstrap with LESS here : Bootstrap with LESS.

Hat tip to Justin Saraceno for introducing me to site44.


Understanding and Using System.Transactions

These are some resources to help grasp System.Transactions functionality and use it effectively in your projects:

  • Features Provided by System.Transactions
  • Implementing an Implicit Transaction Using Transaction Scope
  • MSDN articles by John Papa: ADO.NET and System.Transactions, and Revisiting System.Transactions (these are specific to TransactionScope, the way to go in most cases)
  • A practical example of using TransactionScope
  • An in-depth look at how TransactionScope-like functionality can be implemented; it gives you a good understanding of what happens under the hood when using TransactionScope in some repository implementations in multi-threaded scenarios
  • Another example of implementing a transactional repository
  • Excellent tips about configuring TransactionScope when used with SQL Server
  • A good resource for CommittableTransaction usage
  • Brilliant under-the-hood coverage of System.Transactions

Solution to the fetch puzzle


Here is a brute force solution to the fetch puzzle.

The puzzle goes like this – You have two buckets. A 3 gallon bucket and a 5 gallon bucket. Buckets are not marked or graduated. You are to fetch 4 gallons of water in a single trip to the river. How will you do it?

Basically, at each step there are three possibilities :

  1. You can fill a bucket.
  2. You can transfer water from one bucket to the other one.
  3. You can dump out the water from a bucket.

In this brute force solution, I try each one of these steps and then try all three again after each one of the previous steps. And on and on until I get the required amount of water in one of the buckets.
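The steps above can be sketched as a breadth-first variant of the same brute-force idea (a hypothetical illustration, not the code from my repo): each state is a pair of water levels, and from every state we try every fill, dump and transfer until a bucket holds the target amount.

```python
from collections import deque

def fetch(cap_a, cap_b, target):
    """Brute-force search over bucket states (a, b). From each state,
    try every fill, transfer and dump, breadth-first, until one bucket
    holds the target amount. Returns the sequence of states."""
    start = (0, 0)
    parent = {start: None}  # maps each state to the state it came from
    queue = deque([start])
    while queue:
        state = queue.popleft()
        a, b = state
        if target in state:
            # Walk parent links back to the start to recover the path
            path = []
            while state is not None:
                path.append(state)
                state = parent[state]
            return path[::-1]
        pour_ab = min(a, cap_b - b)  # how much can move from a to b
        pour_ba = min(b, cap_a - a)  # how much can move from b to a
        moves = [
            (cap_a, b), (a, cap_b),        # 1. fill a bucket
            (a - pour_ab, b + pour_ab),    # 2. transfer a -> b
            (a + pour_ba, b - pour_ba),    #    transfer b -> a
            (0, b), (a, 0),                # 3. dump out a bucket
        ]
        for nxt in moves:
            if nxt not in parent:
                parent[nxt] = (a, b)
                queue.append(nxt)
    return None

if __name__ == "__main__":
    for step in fetch(3, 5, 4):
        print(step)
```

Because the search is breadth-first, the first path found is also a shortest one.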

Check it out. Source code is in my GitHub repo.