ExifExtractor: Zero to Desktop App in a Day

30 October, 2019

Nathan Trevivian

Zero to Desktop App in a Day

We’ve been working really hard on our Learning Management System here at CameraForensics HQ - trying to make it easier for investigators to do their job using our search tools. A while back we developed a new type of search called BigSearch.

With BigSearch, you can export Exif data and hashes from a ton of images and upload it to CameraForensics.com. All that data gets run through our database, and match data is displayed in a really high-level overview that you can then drill down into, jumping off into the main search to see matching images at any time.

The problem is, getting that Exif data in the first place.

Thankfully, we have really great relationships with other vendors who specialise in case-management software and who can produce these data files for their users with little effort. As an alternative, we once developed a Python script that would drag the data out of the images and put it in a BigSearch-friendly file. If neither of those was a viable option, we suggested a specific Exiftool command to run in a terminal - most of these other tools and scripts just wrap Exiftool anyway.

However, we realised when writing our course material for the LMS that there was still a gap: how do people extract Exif data into a BigSearch-friendly file in a totally frictionless way? A way that doesn’t require licences for other software, or require the investigator to be comfortable on the command line?

To answer this question we decided to revive an old idea: a desktop application for Windows, macOS and Linux that lets people import lots of images and quickly produce an Exif extract file that also contains hashes of the images.

This was an old idea

Once upon a time we wrote a desktop application called PhotoForensics using node-webkit, which had just renamed itself to NW.js. The frontend was mostly hand-rolled vanilla JavaScript, with some help from Prototype.js. At the time there was a lot of competition between frontend frameworks - ReactJS had only just released a stable version, and we weren’t ready to dip our toes in yet.

The app met MVP requirements, but was very slow at calculating hashes (we were also trying to squeeze in PDNA at that point) and a nightmare to package up into installers that didn’t fire off people’s virus checkers. It didn’t really take off with our users, nor did it become a big enough priority for us to continue building upon, and so it was unintentionally mothballed.

Since that time, Electron (which sprang from GitHub as Atom Shell and developed from there) has gained a massive following, and some really useful Node modules have been developed, making writing a desktop application seem almost too easy to believe. NW.js, backed by Intel, still has its fans, but Electron seems to be making more waves in this particular pond.

ReactJS is now (at the time of writing) on v16.9, and we’ve re-written our main product UI in React, so we’re more comfortable with it now. When I was given the task, therefore, I decided to opt for Electron + ReactJS to create our desktop application.

There are loads of tutorials on how to create Electron apps using ReactJS on the frontend. This blog post is not one of those. Instead I want to pick out the mountain-top and dark-valley experiences of creating, signing and distributing an Electron app that we called ExifExtractor.

Electron + ReactJS = A winning combo (sponsored heavily by npm)

As the title of this blog suggests, writing the main application literally took one day. The phrase bandied around the office was “on the shoulders of giants”, because that’s where we were. Creating the application is as simple as running create-react-app. Then all you have to do is install the electron npm package and create a JS file that becomes the main process. Boilerplate: done.
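That main-process file can be tiny. A sketch (not our exact code - the window size and file layout here are illustrative):

```javascript
// main.js - the Electron main process
const { app, BrowserWindow } = require('electron');

let mainWindow;

function createWindow() {
  mainWindow = new BrowserWindow({
    width: 1024,
    height: 768,
    // What we'd have used in 2019; newer Electron versions favour a preload script
    webPreferences: { nodeIntegration: true }
  });

  // Load the create-react-app dev server in development, the built bundle in production
  mainWindow.loadURL(
    process.env.ELECTRON_START_URL || `file://${__dirname}/build/index.html`
  );
}

app.on('ready', createWindow);
```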

To make installers, install electron-packager or electron-builder, and add a few properties to the package.json file.
To handle publishing your application to a privately managed repo, install electron-publisher and add a few properties to the package.json file.
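To give a flavour, the package.json additions look something like this (a hedged sketch: the appId, publish URL and targets are placeholders - the electron-builder docs have the full set of options):

```json
{
  "main": "main.js",
  "scripts": {
    "dist": "electron-builder --publish always"
  },
  "build": {
    "appId": "com.example.exifextractor",
    "mac": { "category": "public.app-category.utilities" },
    "win": { "target": "nsis" },
    "publish": [
      { "provider": "generic", "url": "https://updates.example.com/exifextractor" }
    ]
  }
}
```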

Are you picking up the pattern here? It seems that every step in the process of generating, publishing, installing and auto-updating a cross-platform desktop app is already covered by existing npm modules - it’s all too easy.

Once upon a time, developing desktop applications took very real effort. You needed people who specialised in Windows Presentation Foundation, or Swing, or writing Objective-C applications, etc. Not any more. All you need to know is JavaScript - a thought that may terrify some people!

The cherry on the cake was that, with a single command on our chosen development platform, we could build, package, sign, notarize and release our application. This was really important for us. PhotoForensics was horrendous to release. It involved running up a VirtualBox instance of Windows, moving files around, running commands - nothing that could be automated easily. One of the requirements for ExifExtractor was that it could be packaged and released in one command (ideally by our CI server), and that’s what we almost achieved.

Notice that “almost”? It wasn’t all plain sailing. We did hit some serious clunks when it came to signing and notarizing the application for hassle-free installation. I’ll go over our pain points later, but first let’s start with something positive.

I ❤️ IPC

Before I started getting my hands dirty writing the React app, I was sure that I’d need a REST API on the backend - probably served by Express - to respond to all of the frontend’s needs. How wrong I was. The Inter-Process Communication (IPC) system used by Electron apps makes it way easier than that. You don’t need REST at all.

With IPC, the main process (via the ipcMain module) communicates with the frontend renderers - AKA your React app (via ipcRenderer) - using event-like communication channels. When the frontend needs something from the backend, it sends an event with any applicable data. It then listens for events from the backend, which responds in turn.

The beauty of this is that you don’t have to pass your callbacks around your React app. You can have one component fire an event to the backend, and then have a completely different component handle the response without any properties or contexts being shared between those two components.
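For this to work, both processes have to agree on the channel names, so they live in one module shared by the main process and the renderer. Something like this sketch (the constant names match the snippets that follow; the string values are illustrative):

```javascript
// channels.js - shared by the main process and the renderers
// (the string values just need to be unique per channel)
const channels = {
  CHOOSE_FILES: 'choose-files',
  FILES_SELECTED: 'files-selected',
  FILES_PROCESSED: 'files-processed'
};

module.exports = channels;
```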

Let’s take a look at a simple example from ExifExtractor: File selection.

When selecting files to import we have a Button in our App.js that fires off the CHOOSE_FILES event:

  chooseFiles = () => {
    ipcRenderer.send(channels.CHOOSE_FILES);
  }
  render() {
    const { appName } = this.state;

    return (
      ...
        <Button 
            className="choose-files--button" 
            onClick={this.chooseFiles}>Add Files</Button>
      ...
    );
  }

Which is handled by the backend:

  const onFilesSelected = (sender, result) => {
    if (!result.canceled) {
      sender.send(channels.FILES_SELECTED, {
        files: result.filePaths.map(file => ({ filePath: file }))
      });
  
      log.info("Processing " + result.filePaths.length + " files...");
      processImages(result.filePaths).then(fileData => {
        sender.send(channels.FILES_PROCESSED, {
          files: fileData
        });
      });  
    }
  }
  
  ipcMain.on(channels.CHOOSE_FILES, event => {
    dialog.showOpenDialog({ 
        title: "Choose Images", 
        message: "Choose images (jpg only) to import", 
        filters: [{ 
            name: 'JPEGs', 
            extensions: ['jpg', 'jpeg'] }], 
        properties: ['openFile', 'multiSelections'] 
    })
    .then(result => {
      log.info(result);
      onFilesSelected(event.sender, result);    
    });
  });

The dialog.showOpenDialog function throws up a native open dialog - much better than using the FileReader API in this case, and it’s all built in as part of the Electron API.

The response is then sent back through IPC to a totally different component - FileList.js:

  componentDidMount() {
    ipcRenderer.on(channels.FILES_SELECTED, (event, arg) => {
      const { files } = arg;
 
      this.setState({
        files: _.uniqBy(
            [].concat(this.state.files).concat(files), 
            d => ( d.filePath )
          )
      });
    });
 
    ipcRenderer.on(channels.FILES_PROCESSED, (event, arg) => {
      const { files } = arg;
      const updatedFiles = this.state.files.map(file => {
        const updatedFile = files.filter(incoming => {
          return incoming.filePath === file.filePath;
        });
        if (updatedFile.length) {
          return updatedFile[0];
        }
        return file;
      });
      this.setState({ files: updatedFiles });
    });
  }
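One caveat worth flagging (and not shown above): ipcRenderer listeners registered in componentDidMount survive the component unmounting, so a re-mounted component would stack up duplicate handlers. A cleanup sketch, assuming the same channels constants:

```javascript
  componentWillUnmount() {
    // Remove only this component's channels so other subscribers are untouched
    ipcRenderer.removeAllListeners(channels.FILES_SELECTED);
    ipcRenderer.removeAllListeners(channels.FILES_PROCESSED);
  }
```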

You may have noticed that the onFilesSelected function accepts a sender. This is because we can also hook up our native application menu option to fire the same onFilesSelected function:

  const menuTemplate = [
  ...
    {
      label: "File",
      submenu: [
        {
          label: "Import...",
          accelerator: "CmdOrCtrl+I",
          click: () => {
            dialog
              .showOpenDialog({
                title: "Choose Images",
                message: "Choose images (jpg only) to import",
                filters: [{ name: "JPEGs", extensions: ["jpg", "jpeg"] }],
                properties: ["openFile", "multiSelections"]
              })
              .then(result => {
                log.info(result);
                onFilesSelected(mainWindow.webContents, result);
              });
          }
        }
      ]
    }
  
  ...
  ]
  
  const menu = Menu.buildFromTemplate(menuTemplate)
  Menu.setApplicationMenu(menu)
 

When executed from the native menu the sender is mainWindow.webContents, and when executed in response to an ipc event, the sender is the event.sender. As you can see, it doesn’t matter how the event was initially fired. All FileList needs to know is when new files are added - it doesn’t care who/what requested them.

Also, as a side note - notice how easy it was to populate the native application menu and respond to menu selections! So easy!

So much 😩

Now, this is where we must admit that the title of this blog post is a little misleading. Whilst it really did take only a day to write the first MVP of ExifExtractor, installers and all, it took a few more weeks just to get the signing and notarizing working.

Signing for Windows had very different pain points than signing for macOS. Let’s look at each in turn.

Signing for Mac

The first hurdle was registering with the Apple Developer Program, which meant filling out a web form, paying $99 (to be renewed annually) and fielding no less than three phone calls from Apple.

The second hurdle was working out which certificates were needed. Windows requires code signing certificates from well-known providers (such as DigiCert, or Sectigo - formerly Comodo), but those won’t work for macOS signing. macOS requires certificates generated by an awkward process involving full-fat Xcode (not just the Xcode command line tools). Generating certs from Keychain appeared to work, but signing and/or notarizing the app with them always failed.

We had to create entitlement plist files for macOS distributions, populated with the correct keys and values. It took some serious investigation to work out which keys and values were expected. It turns out that there is no “Electron App Signing for Dummies” page anywhere, so there was a lot of digging around and trial and error.
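As a sketch of what such a file looks like (the two keys here are the ones typically needed for Electron’s hardened runtime - your app may need others, and we can’t claim this is exactly the file we shipped):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
  <dict>
    <!-- Needed because Electron/V8 JIT-compiles JavaScript -->
    <key>com.apple.security.cs.allow-jit</key>
    <true/>
    <key>com.apple.security.cs.allow-unsigned-executable-memory</key>
    <true/>
  </dict>
</plist>
```

electron-builder picks a file like this up via its mac.entitlements and mac.entitlementsInherit properties.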

On top of signing the application, we also had to produce a script to notarize the packaged application. The application must be signed if you want the auto updating feature to work (this is true for Windows as well as macOS), or if you want to distribute it on the Mac App Store.
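A notarization script can hook into electron-builder’s afterSign step. A sketch using the electron-notarize module (the bundle id and environment variable names here are placeholders, not our actual values):

```javascript
// scripts/notarize.js - wired up via "afterSign" in the electron-builder config
const { notarize } = require('electron-notarize');

exports.default = async function notarizing(context) {
  // Only macOS builds need notarizing
  if (context.electronPlatformName !== 'darwin') return;

  const appName = context.packager.appInfo.productFilename;
  await notarize({
    appBundleId: 'com.example.exifextractor',       // placeholder bundle id
    appPath: `${context.appOutDir}/${appName}.app`,
    appleId: process.env.APPLE_ID,                  // your Apple Developer account
    appleIdPassword: process.env.APPLE_ID_PASSWORD  // an app-specific password
  });
};
```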

Signing for Windows

First, some background and warnings

For Windows, signing your application means that it gets attributed a positive trust value - or Windows SmartScreen Application Reputation score, as it is otherwise known. Once you have a positive score, users are no longer warned that your installer could be a virus.

In order to sign our Windows installer and application, we had to decide between an EV Code Signing certificate or a standard Code Signing certificate.

The standard certificate is cheaper, but the first few hundred users still receive virus warnings. Once enough people have ignored the warnings and clicked the “Run anyway” button, the warning goes away. That is, until you release a new version: the trust score relates to a specific file, so once the file changes (e.g. in a minor release) it returns to zero and you have to start gaining trust all over again.

The EV Code Signing certificate means that your application gets immediate trust, and no users see the virus warnings. However, EV Code Signing Certificates are bound to a hardware device called an eToken (usually a USB thumb drive) and your application cannot be signed using the certificate without access to the eToken. Using this would mean kissing goodbye to the idea of signing and releasing from our cloud-based CI server. It’s pretty hard to plug a USB thumb drive into an EC2 instance, for example.

Now for the pain points - ah! There were so many!

Firstly, the process of getting an EV Certificate is long and arduous. The issuing organisation has to do a lot of due diligence on you/your company. We had to fill out and sign several forms, and get them sent off to the issuing authority. Then we had to wait several weeks before the eToken turned up. A couple of times we had to chase them up, and a couple of times they confessed that they had forgotten to send us yet another form.

When the eToken finally arrived there were no useful instructions or any guidance on how to use it. There was documentation, but none of it helpful to our cause. The recommended software to access it (something called SafeNet) crashed when started up from the Applications shortcut it created after install.

The issuer sent us an email shortly after delivery with a token password and instructions to change the password. It turned out that this was impossible without an administrator’s password, which they don’t issue for security reasons. So you have to raise a ticket on their support desk and teleconference with a support agent, who then takes control of your computer in order to enter the administrator’s password so that you can change the token password.

Even the electron-builder documentation wasn’t that helpful when it came to using the eToken to sign the application, though it did get us a long way there. We were back to throwing search terms at Google and poring over GitHub issues to see what other users had done. For example, the required hardwareToken.cfg file was slightly different from the one documented in the electron-builder code signing instructions:

  name = eToken
  library = /Library/Frameworks/eToken.framework/Versions/A/libeToken.dylib
  slotListIndex = 0

It probably differs depending on what sort of token you have, which will depend on where you obtained it.

We were hoping to release from *nix systems, and so opted to use jsign to sign our Windows application and installer, as it appeared to be a one-liner with no additional installs required. Unfortunately it regularly crashed horribly (although we believe this was related to the version of macOS we were running). It seemed to be a case of simply retrying until it worked.
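For reference, the kind of invocation we ended up with looked roughly like this (a sketch: the alias, timestamp URL and installer filename are placeholders, and jsign’s exact flags may differ between versions):

```shell
# Retry loop because jsign crashed intermittently for us
for attempt in 1 2 3 4 5; do
  jsign --keystore hardwareToken.cfg \
        --storetype PKCS11 \
        --storepass "$ETOKEN_PASSWORD" \
        --alias "YOUR-CERT-ALIAS" \
        --tsaurl http://timestamp.digicert.com \
        "dist/ExifExtractor Setup.exe" && break
done
```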

The jsign command requires the alias of the certificate. SafeNet doesn’t tell you what the alias is, and when you export the certificate and install it in your keychain, it isn’t made clear there either. After more digging we worked out a keytool command that finally listed the certificate aliases on an eToken:

  keytool -list \
  -keystore NONE \
  -storetype PKCS11 \
  -providerClass sun.security.pkcs11.SunPKCS11 \
  -providerArg "./hardwareToken.cfg"

Unfortunately even the output of that doesn’t make it clear what the actual alias is. The output was:

  Keystore type: PKCS11
  Keystore provider: SunPKCS11-eToken
  
  Your keystore contains 1 entry
  
  SafeNet eToken 5110:ALIAS-REDACTED, PrivateKeyEntry, 
  Certificate fingerprint (SHA1): SHA-1-REDACTED

We were hoping for something like:

  Alias: ALIAS HERE

But no joy. We had to try the signing process a couple of times with different values as alias before getting it right.

During the signing process, we made a mistake when moving the eToken password into an environment variable instead of having it hard-coded. Because the signing process signs several files, and because the signing command has to sit in a retry loop due to the regular crashes, it attempted signing lots of times, got the password wrong lots of times, and locked the eToken. 😭 Fortunately the issuer was able to unlock it for us (over a Zoom session), otherwise we’d have been paying for a new one. 😅

Despite all my moaning, electron-builder does a really great job of providing hooks for you to specify notarization and signing-for-Windows scripts. Signing for macOS is covered by electron-osx-sign, which is wrapped up in electron-builder. So once you’ve figured out the magic formula, you realise that it’s actually quite simple.

Conclusions

  • Electron + ReactJS is awesome.
  • Open source contributors to packages like electron-builder, electron-publisher, electron-updater, electron-osx-sign etc are awesome and deserve medals or something.
  • Certificate issuers need to seriously up their game.
  • The world is crying out for an EV Certificate that you can access from a cloud-based CI server.
  • There really does need to be a “Signing Electron Apps for Dummies” blog post.

Don’t be afraid - go forth, create Electron desktop apps, and have fun doing it.

ExifExtractor is currently available for download from our BigSearch page.