Serverless Image Classification with Oracle Functions and TensorFlow

Image classification is a canonical example used to demonstrate machine learning techniques. This post shows you how to run a TensorFlow-based image classification application on the recently announced cloud service Oracle Functions.


Oracle Functions

Oracle Functions is a fully managed, highly scalable, on-demand, functions-as-a-service platform built on enterprise-grade Oracle Cloud Infrastructure. It's a serverless offering that lets you focus on writing code to meet business needs, without worrying about the underlying infrastructure, and you're billed only for the resources consumed during execution. You can deploy your code and call it directly or in response to triggers; Oracle Functions does all the work required to ensure that your application is highly available, scalable, secure, and monitored.

What to Expect

Before we dive into the details, let's see what you can expect from your serverless machine learning function. After it's set up and running, you can point it at an image and it will return its best guess at what the image shows, along with a confidence percentage for that guess.


The Code

The image classification function is based on an existing TensorFlow example. It uses the TensorFlow Java SDK, which in turn calls the native C++ implementation through JNI (Java Native Interface).

Function Image Input

The image classification function leverages the Fn Java FDK, which simplifies the process of developing and running Java functions. One of its benefits is that it can seamlessly convert the input sent to your functions into Java objects and types. This includes:

  • Binding JSON data types to POJOs. You can customize this because it’s internally implemented using Jackson.
  • Working with raw inputs, enabled by an abstraction of the raw Fn Java FDK events received or returned by the function.
public class LabelImageFunction {
    public String classify(byte[] image) {
        // ...
        // The raw request body is bound directly to the byte[] parameter
        Tensor<String> input = Tensors.create(image);
        // ...
    }
}
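To see the raw-input binding in isolation, you can exercise a byte[] entry point outside the Fn runtime. Below is a minimal sketch, with a trivial stand-in for classify (no TensorFlow involved) and a throwaway temp file in place of a real image; LocalInvoke and its classify body are hypothetical, not part of the actual function:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class LocalInvoke {
    // Stand-in for LabelImageFunction.classify: just reports the payload size
    // instead of running the TensorFlow model.
    static String classify(byte[] image) {
        return "received " + image.length + " bytes";
    }

    public static void main(String[] args) throws IOException {
        // Mirrors `cat image.jpg | fn invoke ...`: the request body reaches
        // the function as raw bytes, with no JSON envelope.
        Path image = Files.createTempFile("sample", ".jpg");
        Files.write(image, new byte[]{(byte) 0xFF, (byte) 0xD8, (byte) 0xFF}); // JPEG magic number
        System.out.println(classify(Files.readAllBytes(image)));
        Files.delete(image);
    }
}
```

This kind of harness is handy for unit-testing the function logic without deploying it.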

Machine Learning Model

Typically, a machine-learning-based system consists of the following phases:

  • Training: A model is built by feeding the algorithm a large set of labeled examples from which it learns
  • Predicting: The generated model is then used to generate predictions or outputs in response to new inputs, based on the facts that were learned during the training phase
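As a concrete illustration of the prediction step: once the model returns a probability per label, the function only needs to pick the most likely one and report its confidence. Here is a minimal sketch in plain Java; the label names and probabilities are made-up stand-ins for the model's real output, and bestLabel is a hypothetical helper, not code from the actual function:

```java
import java.util.List;

public class Predict {
    // Returns the label with the highest probability, formatted with its
    // confidence as a percentage.
    static String bestLabel(float[] probabilities, List<String> labels) {
        int best = 0;
        for (int i = 1; i < probabilities.length; i++) {
            if (probabilities[i] > probabilities[best]) {
                best = i;
            }
        }
        return String.format("This is a '%s' Accuracy - %.0f%%",
                labels.get(best), probabilities[best] * 100);
    }

    public static void main(String[] args) {
        // Made-up output vector for three labels
        float[] probs = {0.07f, 0.88f, 0.05f};
        List<String> labels = List.of("beagle", "West Highland white terrier", "sombrero");
        System.out.println(bestLabel(probs, labels));
        // prints: This is a 'West Highland white terrier' Accuracy - 88%
    }
}
```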

Function Metadata

The func.yaml file contains function metadata, including attributes like memory and timeout (for this function, 1024 MB and 120 seconds, respectively). These relatively high values are needed because the image classification algorithm is fairly demanding compared to simpler computations.

schema_version: 20180708 
name: classify
version: 0.0.1
runtime: java
memory: 1024
timeout: 120
triggers:
- name: classify
  type: http
  source: /classify
  • name is the name of the function and of the image to which it is pushed.
  • version is the current version of the function. When deploying, it is appended to the image as a tag.
  • runtime is the programming language runtime, java in this case.
  • memory (optional) is the maximum memory threshold for the function. If the function exceeds this limit during execution, it is stopped and an error message is logged.
  • timeout (optional) is the maximum time the function is allowed to run.
  • triggers (optional) is an array of trigger entities that specify how the function is invoked. In this case, we're using an HTTP trigger.

Function Dockerfile

Oracle Functions uses a set of prebuilt, language-specific Docker images for build and runtime phases. For example, for Java functions, fn-java-fdk-build is used for the build phase and fn-java-fdk is used at runtime.

FROM fnproject/fn-java-fdk-build:jdk9-1.0.75 as build-stage 
WORKDIR /function
ENV MAVEN_OPTS -Dhttp.proxyHost= -Dhttp.proxyPort= -Dhttps.proxyHost= -Dhttps.proxyPort= -Dhttp.nonProxyHosts= -Dmaven.repo.local=/usr/share/maven/ref/repository
ADD pom.xml /function/pom.xml
RUN ["mvn", "package", "dependency:copy-dependencies", "-DincludeScope=runtime", "-DskipTests=true", "-Dmdep.prependGroupId=true", "-DoutputDirectory=target", "--fail-never"]
ADD src /function/src
RUN ["mvn", "package"]
FROM fnproject/fn-java-fdk:jdk9-1.0.75
WORKDIR /function
COPY --from=build-stage /function/target/*.jar /function/app/
CMD ["com.example.fn.HelloFunction::handleRequest"]
This default Dockerfile does the following:

  • Builds the function with Maven (in the build stage)
  • Copies (using COPY) the function JAR and dependencies to the runtime image
  • Sets the command to be executed (using CMD) when the function container is spawned

For this function, the Dockerfile is extended to download the TensorFlow Java SDK and the native JNI libraries:
FROM fnproject/fn-java-fdk-build:jdk9-1.0.75 as build-stage
WORKDIR /function
ENV MAVEN_OPTS -Dhttp.proxyHost= -Dhttp.proxyPort= -Dhttps.proxyHost= -Dhttps.proxyPort= -Dhttp.nonProxyHosts= -Dmaven.repo.local=/usr/share/maven/ref/repository
ADD pom.xml /function/pom.xml
RUN ["mvn", "package", "dependency:copy-dependencies", "-DincludeScope=runtime", "-DskipTests=true", "-Dmdep.prependGroupId=true", "-DoutputDirectory=target", "--fail-never"]
ARG TENSORFLOW_VERSION=1.12.0
RUN echo "using tensorflow version " $TENSORFLOW_VERSION
RUN curl -LJO https://storage.googleapis.com/tensorflow/libtensorflow/libtensorflow-$TENSORFLOW_VERSION.jar
RUN curl -LJO https://storage.googleapis.com/tensorflow/libtensorflow/libtensorflow_jni-cpu-linux-x86_64-$TENSORFLOW_VERSION.tar.gz
RUN tar -xvzf libtensorflow_jni-cpu-linux-x86_64-$TENSORFLOW_VERSION.tar.gz
ADD src /function/src
RUN ["mvn", "package"]
FROM fnproject/fn-java-fdk:jdk9-1.0.75
ARG TENSORFLOW_VERSION=1.12.0
WORKDIR /function
COPY --from=build-stage /function/libtensorflow_jni.so /function/runtime/lib
COPY --from=build-stage /function/libtensorflow_framework.so /function/runtime/lib
COPY --from=build-stage /function/libtensorflow-$TENSORFLOW_VERSION.jar /function/app/
COPY --from=build-stage /function/target/*.jar /function/app/
CMD ["com.example.fn.LabelImageFunction::classify"]
  • In the second stage of the Docker build, the JNI libraries are copied to /function/runtime/lib and the SDK JAR to /function/app so that they are available to the function at runtime.

Deploying to Oracle Functions

As mentioned previously, you can use the open source Fn CLI to deploy to Oracle Functions. Ensure that you have the latest version.

curl -LSs https://raw.githubusercontent.com/fnproject/cli/master/install | sh

Oracle Functions Context

Before using Oracle Functions, you have to configure the Fn Project CLI to connect to your Oracle Cloud Infrastructure tenancy.

api-url: https://functions.us-phoenix-1.oraclecloud.com
oracle.compartment-id: <OCI_compartment_OCID>
oracle.profile: <profile_name_in_OCI_config>
provider: oracle
registry: <OCI_docker_registry>

Oracle Cloud Infrastructure Configuration

The Oracle Cloud Infrastructure configuration file contains information about user credentials and the tenancy OCID. You can create multiple profiles with different values for these entries. Then, you can define the profile to be used by the CLI by using the oracle.profile attribute.

[DEFAULT]
user=ocid1.user.oc1..exampleuniqueID
fingerprint=20:3b:97:13:55:1c:5b:0d:d3:37:d8:50:4e:c5:3a:34
key_file=~/.oci/oci_api_key.pem
tenancy=ocid1.tenancy.oc1..exampleuniqueID
pass_phrase=tops3cr3t
region=us-ashburn-1

[ORACLE_FUNCTIONS_USER]
user=ocid1.user.oc1..exampleuniqueID
fingerprint=72:00:22:7f:d3:8b:47:a4:58:05:b8:95:84:31:dd:0e
key_file=/.oci/admin_key.pem
tenancy=ocid1.tenancy.oc1..exampleuniqueID
pass_phrase=s3cr3t
region=us-phoenix-1
Then, switch to the context you configured:

fn use context <context_name>

Create the Application

Start by cloning the contents of the GitHub repository:

git clone https://github.com/abhirockzz/fn-hello-tensorflow
Then, create an application to host your function:

fn create app <app_name> --annotation oracle.com/oci/subnetIds='["<subnet_ocid>"]'

  • <subnet_ocid> is the OCID of the subnet in which to run your function.

For example, with multiple subnets:

fn create app fn-tensorflow-app --annotation oracle.com/oci/subnetIds='["ocid1.subnet.oc1.phx.exampleuniqueID","ocid1.subnet.oc1.phx.exampleuniqueID","ocid1.subnet.oc1.phx.exampleuniqueID"]'

Deploy the Function

After you create the application, you can deploy your function with the following command:

fn deploy --app <app_name>

For example (the -v flag enables verbose output):

fn -v deploy --app fn-tensorflow-app

The function depends on the TensorFlow Java SDK, declared in pom.xml:

<dependency>
    <groupId>org.tensorflow</groupId>
    <artifactId>tensorflow</artifactId>
    <version>1.11.0</version>
    <scope>provided</scope>
</dependency>

Note the provided scope: Maven does not bundle the SDK JAR; instead, the Dockerfile downloads it and copies it into the runtime image. The version downloaded there is controlled by the TENSORFLOW_VERSION build argument (which defaults to 1.12.0), so to keep it in sync with pom.xml, pass the matching version at deploy time:

fn -v deploy --app fn-tensorflow-app --build-arg TENSORFLOW_VERSION=<version>

For example:

fn -v deploy --app fn-tensorflow-app --build-arg TENSORFLOW_VERSION=1.11.0

Time to Classify Images!

As mentioned earlier, the function can accept an image as input and tell you what it is, along with the percentage accuracy.

cat <path to image> | fn invoke fn-tensorflow-app classify

For example:

cat /Users/abhishek/manwithhat.jpg | fn invoke fn-tensorflow-app classify
“366 • 9 • Gringo” (CC BY-NC-ND 2.0) by Pragmagraphr
This is a 'sombrero' Accuracy - 92%
cat /Users/abhishek/terrier.jpg | fn invoke fn-tensorflow-app classify
“Terrier” (CC BY-NC 2.0) by No_Water
This is a 'West Highland white terrier' Accuracy - 88%

Summary

We just deployed a simple yet fully functional machine learning application in the cloud! Eager to try this out?
