
Plug-In Generation to Extend Salesforce CLI

When any team at Salesforce builds a new feature, we remind ourselves that every customer is unique. That is why we build features to be customizable and extendable. Salesforce CLI is no different: We know that as a developer your workflow is unique, and although Salesforce CLI comes with a great set of commands right out of the box, you might need something more. That is why the Salesforce CLI team worked closely with the Heroku team to give you the ability to create your own custom commands.

We call this Plug-In Generation and it is built on top of Heroku’s Open CLI Framework (oclif). When you run the Salesforce CLI command sfdx plugins:generate, the Salesforce CLI plug-in wizard leads you through the configuration steps for your new command and generates a plug-in project. You start with a scaffolded command, basically a fully functional “Hello World” command, and the project’s file structure so you don’t need to start from scratch.
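If you want a feel for the end-to-end flow before reading further, here is a minimal sketch of the commands involved. The project name is a placeholder (you choose it in the wizard), and the exact prompts may vary by CLI version:

# Launch the plug-in wizard and answer the prompts (name, description, author, and so on)
sfdx plugins:generate

# Link the generated project to your local CLI so its commands are available
cd my-first-plugin        # whatever project name you chose in the wizard
sfdx plugins:link .

# Confirm the plug-in shows up alongside the core plug-ins
sfdx plugins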

Today, Plug-In Generation is generally available, and we are open sourcing the Core and Command libraries so you can dig into the code and, if there's functionality you'd like to see, contribute back to the repository.

What are we announcing today?

  1. Plug-In Generation is GA: We provide the scaffolding to get you going so you can build whatever you want. Plus, when you generate a new plug-in, there is an example command in the project folder you can dig into.
  2. A plug-in developer guide: The Salesforce CLI Plug-In Developer Guide walks you through the process of building a plug-in with examples so you can get going now.
  3. Open-sourcing the CLI Core and Command libraries:
    1. The Core Library is a Node.js module that contains all the “core” utilities of the Salesforce CLI infrastructure. These utilities include authentication, config, project management, and much more. Your plug-in can use the Core Library to access information in the same way the force: commands do. Explore the Examples folder in the repo, because you can do more than just Plug-In Generation with the Core Library.
    2. The Command Library gives your plug-in access to flag parsing, argument parsing and validation, and help: all the things a good command should have. Using the Command Library makes your command consistent with other SFDX commands and user-friendly.

What can I build?

If you already have custom scripts that you use with Salesforce CLI, revisit those scripts to determine whether it makes sense to create a custom plug-in instead. Whenever you want to perform repetitive tasks, support multiple operating systems, or work across multiple DX repositories, a custom command or set of commands may improve your team's workflow.

For example, let’s say you have a custom linting program that you want your team to run every time they push to a scratch org. You can create a plug-in that does just that.

Another example of a command that you can create might be to post to Chatter when you deploy your code to a sandbox or create a new package version. In this case you could create a command that connects to your Salesforce Chatter feed and then add that command to your build or automation scripts. The possibilities are only constrained by your imagination!

How do I get started?

If you are the type that wants to dive right in, run the ‘sfdx plugins:generate’ command and start exploring the scaffolded project. If you are the type that wants to get the lay of the land first, start by reviewing the Salesforce CLI Plug-In Developer Guide. It will give you a feel for what you need to start building and how to get started. Then, review the platform event streamer plug-in to see a plug-in in action (you can even watch this Dreamforce talk about how it was made). Look at how the plug-in works and how it incorporates the Core and Command libraries.

What’s coming?

We are excited to share this with you now, but we also have more that we plan to build to make Plug-In Generation even more useful. We are currently building a community signing feature so you can sign your plug-ins. After that is available, we plan to create commands in the CLI that allow Salesforce CLI users to search for plug-ins and get suggestions directly from the CLI. That way you can share your plug-in with the Salesforce Developer community.

We are excited to see what you build, so don’t forget to share your creations with us on social media #sfdx.

To learn more about Salesforce CLI, check out the Trailhead Quick Start on Salesforce DX.

About the author

Claire Bianchi is the Product Manager for the Salesforce CLI. She joined Salesforce in June 2018. Prior to this role she was at Atlassian as the Product Manager of the Bitbucket Ecosystem and Front End teams. She holds two degrees from the University of California, Berkeley, a BA in Economics and an MBA from the Haas School of Business. Prior to grad school, she was in database marketing at Hotwire.com and ran CRM at UniversityNow. Follow Claire on Trailhead, LinkedIn, and Twitter.


Salesforce Developers Succeed Together in the Trailblazer Community

You’ve found our blog and website, and you probably have a Developer Edition org. But have you been to a Developer Group meetup in real life?

Whether you’ve been working with the Salesforce platform since the dawn of Apex (like me), you’ve just switched from another platform (welcome!), or you’re building your dev skills from scratch (keep on learning!), I want you to know there’s something different here, and it’s not just the platform.

The Salesforce Platform has world-class technology. It’s metadata-driven and multi-tenant. It’s trusted, fast, easy, and smart. In short, it’s groovy. But there’s a lot of cool technology out there, and the same can be said for other platforms. When I talk to Salesforce Developers, what I hear most is that they love the tech, but the key differentiator is the community. I also hear that from Bret Taylor, Salesforce President and Chief Product Officer, and co-creator of Google Maps and the Google Maps API, among other things.

 

So, if you haven’t yet stepped foot into a Developer Group meeting or joined a Developer Group meetup virtually, you’re missing out!

What’s a Developer Group?

Developer Groups in the Trailblazer Community are one of the fastest ways to learn best practices and get inspired by learning with and from your peers. There are hundreds of active Trailblazer Community Groups all over the world. These volunteer-led groups typically meet monthly to learn with and from each other (and Salesforce employees) and to inspire each other to achieve great things in their careers, companies, and communities.

Take a look on TrailblazerCommunityGroups.com to find in-person or virtual groups that interest you, and register for the next meeting!

Don’t take it from me — check out what our Developer Group members have to say:

Jigar Shah

Melissa Hansen

Philip Southern

Sue Maass

Adam Olshansky


 

Ready to get inspired? Go to TrailblazerCommunityGroups.com and I’ll see you soon!

Awesome resources

Mary’s Favorite In-Person DG: New York City, NY Developers Group
Mary’s Favorite Virtual DG: Women in Tech Developers Group
Trailhead Module: Trailblazer Community Groups

About the author

Mary Scotton is a VP of Developer Evangelism at Salesforce and has been blazing trails with the Salesforce Platform since 2005. Originally a member of the Platform Product Management team, she led the development of many of the point & click app development tools that are still used today. She is passionate about educating, enabling, and having fun with the awesome Salesforce Admin and Developer community. You can find her on Twitter @rockchick322004.


Hands-On with Financial Services Cloud: Tying It All Together

This is the third and final part of our blog series, where we cover some of the best practices for writing code that extends Financial Services Cloud and adheres to Salesforce's security recommendations. We also review some of the packaging nuances of building on top of Financial Services Cloud.

Before diving in here, you may want a refresher on parts 1 and 2 of this series.

Let’s start with the development best practices.

Writing secure code

When building managed packages, it is important to implement secure coding practices to ensure that your package passes the Salesforce security review. Salesforce has published an Enterprise Security API library (ESAPI) to provide a convenient, easy mechanism for injecting security best practices into your programmatic constructs. The source code published for the DriveWealth project referenced in the second part of this blog series makes extensive use of this library.

Just to refresh your memory, here is the sequence diagram that explains the high-level flow of the project.

The getDWAccount method of DriveWealth.cls uses the Util class's AccessController property to assert access control on the respective objects and fields before returning the account information.

public static DW_Account__c getDWAccount(Id DWAccountID){

        //CRUD/FLS check
        //Check for DW Account
        Util.AccessController.assertAuthorizedToView(
                Schema.DW_Account__c.getSobjectType(),
                new List<Schema.SobjectField>{
                        Schema.DW_Account__c.fields.Name, Schema.DW_Account__c.fields.Account__c,
                        Schema.DW_Account__c.fields.Account_ID__c, Schema.DW_Account__c.fields.Account_No__c,
                        Schema.DW_Account__c.fields.Account_Type__c
                }
        );
        //Check for account as we also need it for that
        Util.AccessController.assertAuthorizedToView(
                Schema.Account.getSobjectType(),
                new List<Schema.SobjectField>{
                        Schema.Account.fields.Name, Schema.Account.fields.DW_User_ID__c,
                        Schema.Account.fields.DW_Username__c, Schema.Account.fields.DW_Password__c
                }
        );

The Util class exposes the SFDCAccessController object from the ESAPI library.

public with sharing class Util {
    public static Boolean debug = false;
    public static void log(Object message) {
        if(debug == true) {
            System.debug(message);
        }
    }
    /**
     * @return the current ESAPI SFDCAccessController object being used to maintain the access control rules for this application.
     */
    public static SFDCAccessController AccessController {
        get {
            if (AccessController == null) {
                AccessController = new SFDCAccessController();
            }
            return AccessController;
        } private set;
    }
}

The SFDCAccessController class also includes other methods (for example, isAuthorizedToView) that perform CRUD and FLS checks, which you can invoke from your package.

public boolean isAuthorizedToView(Schema.SObjectType someType, List<String> fieldNames) {
    return checkAuthorizedToView(someType, fieldNames, false);
}

public boolean isAuthorizedToUpdate(Schema.SObjectType someType, List<String> fieldNames) {
    return checkAuthorizedToUpdate(someType, fieldNames, false);
}

Also note that the queries in the DataQuery class, which consolidates the queries used in the project, use static query text with bind variables, minimizing vulnerability to SOQL injection attacks.

        return [select ID,Name,Account__c,Account_ID__c, Account_No__c, Account_Type__c, Account__r.DW_User_ID__c,
                        Account__r.DW_Username__c, Account__r.dW_Password__c
                    from DW_Account__c WHERE Id=:DWAccountID];
    }

This is only a sampling of the secure coding guidelines that Salesforce's security review team looks for as they review your application. Covering the entire set of guidelines is outside the scope of this article.

Deploying the project using Salesforce DX

You can use Salesforce DX to deploy the sample project in this blog series to a scratch org or any other development org. The project's GitHub readme files provide instructions and videos for using Salesforce DX with this project, including a one-click deployment of the project to any Financial Services Cloud developer org that you choose. A rough sketch of a scratch org deployment is shown below.
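This sketch is not the project's exact instructions (refer to the readme for those); it shows the general shape of a Salesforce CLI scratch org deployment. The aliases and the scratch org definition file path are placeholders, and the definition file must include the features and settings Financial Services Cloud requires:

# Authorize your Dev Hub (one time) and give it an alias
sfdx force:auth:web:login -d -a DevHub

# Create a scratch org from the project's scratch org definition file and make it the default
sfdx force:org:create -f config/project-scratch-def.json -a fsc-dev -s

# Push the project source to the scratch org, then open it in a browser
sfdx force:source:push -u fsc-dev
sfdx force:org:open -u fsc-dev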

Packaging and upgrades

Financial Services Cloud is a managed package built on top of Sales Cloud and Service Cloud. When you create your packages, you are creating extensions to the Financial Services Cloud managed package. Packaging your extensions to Financial Services Cloud follows the same methodology as any other managed package.

Of course, there are still some nuances that AppExchange partners need to consider when building extensions to Financial Services Cloud.

  • Financial Services Cloud is not available in the prerelease orgs that partners usually use for testing in advance of a Salesforce update. Make sure that you have sandboxes as part of the release upgrade cycle, which will give you access to an environment for regression testing your app against the new version of Financial Services Cloud.
  • Financial Services Cloud is a managed package and has custom triggers that execute “with sharing”. Be mindful of the impact that your application’s DML might have in transaction contexts. For example, if your package contains code that is expected to execute “without sharing,” the DML in your package could invoke the Financial Services Cloud triggers. These triggers will respect the sharing rules of the running user and might roll back your transaction if the user does not have access to the records.

Person Accounts in Financial Services Cloud

As of Spring ’18, Financial Services Cloud supports Person Accounts. The Financial Services Cloud team encourages all partners and customers to support Person Accounts.

Enabling Person Accounts and referencing the specific fields creates a dependency in your managed package. Make sure that you build your application to work in Financial Services Cloud orgs with and without Person Accounts enabled. Refer to this blog which provides more insight into creating applications that support Person Accounts in Financial Services Cloud.

Financial Services customer trial org

Customers can now get access to a free 30-day preconfigured Financial Services Cloud org that comes seeded with sample data. Refer to the Free Trial page for details. Partners can also use the trial Enterprise Edition org. These orgs expire in 30 days.

Conclusion

This three-part series provided an overview of Financial Services Cloud and some insight into the best practices for building secure applications on it. Financial Services Cloud is one of the fastest growing products in the Salesforce ecosystem, and partners can build new applications or port their horizontal or vertical applications to this industry cloud.

Resources


Modular App Dev: Your Questions Answered

On our recent webinar Modern App Dev: Modular Development Strategies, we saw lots of great questions come in from our (wonderful and super sharp) attendees. We’re tackling the most common ones (that we didn’t get to during the webinar) here.

What tools were you all using?

We showed you a few different tools:

  • VS Code (the IDE we both used)
  • Salesforce CLI
  • Salesforce Extensions for VS Code (find installation instructions here)
    • If you scroll down on the Marketplace Extensions site to the Documentation for Included Extensions section, you can dig into the different extensions that are part of the extension bundle and their corresponding documentation. You can also check out the GitHub repository of the extensions, as it is an open-source project.

If you’re new to these tools, make sure to check out the resources at the end of this post.

Do we have to use Salesforce DX or enable anything to get started? Can we just use sandboxes?

As we mentioned, Salesforce DX itself includes lots of tools and new features to help you build and deliver Salesforce apps. The Salesforce CLI, sandboxes, scratch orgs, Change Sets and unlocked packages could all be considered part of Salesforce DX.

So if you’re building and delivering apps on Salesforce (and not just building directly in production!), you’re already using pieces of Salesforce DX. But if you want to start building your apps and deploying changes in more modular, focused ways — yes, you’ll want to look at using more of the tools and features offered by Salesforce DX. But no, you don’t have to use every part of Salesforce DX in order to get started.

So what tools do you need to use to get started?
The real answer is the Salesforce CLI. You can install the CLI and use it to work with any kind of Salesforce org. If you want to get powerful connections to the Salesforce CLI from within your IDE, then you should also check out VS Code and the Salesforce Extensions for VS Code. Plus, there’s LOTS of other awesome stuff in those extensions.
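If you're not sure whether the CLI is already on your machine, or whether it's current, these two commands are a quick check:

# Print the installed CLI version
sfdx --version

# Update the CLI and its core plug-ins to the latest release
sfdx update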

As you work, you may find that you have to connect to a Dev Hub org in order to execute some CLI commands. We’ll talk more about that below.

Where can we activate a Dev Hub? Can sandboxes be Dev Hubs? Is there a cost to activating a Dev Hub?

Dev Hub functionality can only be enabled in production orgs and Developer Edition orgs. You cannot use a sandbox as a Dev Hub org. There is no cost to enabling Dev Hub functionality in an org. If you want to explore without enabling your production org as a Dev Hub, that's fine. As of the Winter ’19 release, you can enable Dev Hub functionality in any Developer Edition org. (No more special sign-up!)

Be aware that the type of org you use as your Dev Hub determines the limits you'll need to plan for as you work.

Do developers have to have access to Production to use Salesforce DX?

Developers on your team do not have to have access to your production environment to use VS Code + Salesforce Extensions or the Salesforce CLI. They do need to be able to authenticate into any orgs you expect them to build in, fetch metadata from, or deploy changes to. Also look at the Modify Metadata permission (in beta), which allows you to grant access to an org's metadata only.

If you've enabled a production org as a Dev Hub, then you'll want to figure out how you want developers authenticating into that org. There are a few options for managing this. In our demo, we used the force:auth:web:login command to authenticate into an org with a username/password combination in a browser window. That's just one of the ways you can use the CLI to authenticate against orgs. Check out the CLI Command Reference for more.
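For reference, the web-based login from the demo looks something like this; the alias is just an example:

# Opens a browser window so the developer enters credentials there, never in the terminal
sfdx force:auth:web:login -a MyDevHub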

We’re a new team moving to Salesforce and Salesforce DX. How should we get started?

You should check out our Getting Started with Salesforce DX series. You should make sure you get set up with the Salesforce CLI, VS Code + Salesforce Extensions. Also, look at choosing a source control system, if you don’t already have one set up.

Start thinking now about how you might want to organize your metadata into units within source control. You can check out this post on using deployment dependencies to structure metadata into units for ideas.

How do we get started pulling stuff out of our org and into source? How can we start managing dependencies?

You should start small. As you start to pull metadata out of your org (maybe by using the new ‘force:source:retrieve’ Salesforce CLI command), you'll want to work with a targeted set of metadata. Starting small not only lets you work more quickly (less time pulling things down to your machine and sending them back up to the server), it also helps you get a better handle on your dependencies.

You can use a package.xml file to tell the CLI what metadata it should pull down from your org. If you're not as familiar with package.xml files, check out the ‘Building your Package.xml’ section in this blog post about metadata retrieval. If you use tools that help you create a package manifest, like MavensMate or the Force.com IDE, you can also use them to help you generate a package.xml file.
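For example, assuming you've saved a manifest at manifest/package.xml and authorized an org under the alias MyDevOrg (both names are placeholders), a targeted retrieve looks like this:

# Retrieve only the metadata listed in the manifest from the org aliased MyDevOrg
sfdx force:source:retrieve -x manifest/package.xml -u MyDevOrg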

If we’re not supposed to try and grab everything from our org in one package, how should we organize our different apps in source?

The fast answer here is that you should choose a source control strategy that makes the most sense for your team. The longer answer involves a couple things: 1) making sure you and your team have a solid understanding of source control, 2) understanding how to control dependencies in your Salesforce DX projects (and unlocked packages, if you’re moving in that direction).

If you’re brand new to source control, check out the Git and GitHub Basics module on Trailhead. If you want to get a better understanding of patterns for your repos and managing package versions, check out this post on branching strategies and package versioning. If you want to get a better understanding of ways you can organize your metadata into modules, check out the Trailhead resources at the end of this post (and the post about organizing metadata we mentioned above). You can also see these concepts in action in our Easy Spaces Sample App.

Can we put the same piece of metadata in multiple unlocked packages?

You can, but as a general rule you shouldn't. Not only does it pretty much defeat the purpose of packaging (stable, clearly versioned units of metadata), it also keeps you from getting the full benefit of building good, modular units, and it doesn't take advantage of what unlocked packaging already lets you do to declare and manage dependencies. Last, but not least, you cannot install packages that contain the same piece of metadata into the same org.

If a particular piece of metadata, like an object and its fields, is key to multiple packages, consider instead whether you can create a base or shared package with just that kind of important metadata. You can then make other unlocked packages depend on whatever combination of shared packages you need.

Wait. What was with that ‘-w’ parameter for package versioning? How do we know what number to put in there?

When you issue the command to create a package version (sfdx force:package:version:create), you'll want to get into the habit of adding the -w parameter. This optional parameter tells the Salesforce CLI to wait for the results of your package version create request for an amount of time you specify. You provide an integer (a whole number of minutes) with the -w parameter to control how long the CLI should wait for results. The time a package version request takes depends on lots of factors: what's in your package, how many other requests for package versions are queued at any given time, and so on. We've found 10 minutes (‘-w 10’) to be a reasonable default value.

Having the CLI wait for results lets you take advantage of the new capabilities in the CLI to update the packageAliases information in a Salesforce DX project, which makes managing your packages and package versions much simpler.
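Putting that together, a version create request might look like the following sketch. The package alias, Dev Hub alias, and wait time are just examples, and the -x flag (bypass the installation key) is only one of the options for handling installation keys:

# Create a new package version and wait up to 10 minutes for the result
sfdx force:package:version:create -p "Expenses App" -w 10 -x -v DevHub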

What OTHER tools were you all using?

Two things were also asked quite a bit: How can I get autocompletion for the Salesforce CLI? And how can I get information like the current git branch into my shell?


Autocompletion is not currently available out of the box in the Salesforce CLI. The engineering team is working on it (safe harbor) for next year. Meanwhile, if you're using bash or zsh, you can check out Wade Wegner's repos for zsh autocompletion and bash autocompletion.

For getting additional information like the current git branch, the default username for your scratch org, and many more things, we use Oh My Zsh, which provides tons of customization options for users of the zsh shell.

Awesome resources

Webinar recording: VS Code for Salesforce Developers
Blog: VS Code for Salesforce Developers: Your Questions Answered
Trailhead Module: Application Lifecycle and Development Models
Trailhead Module: Package Development Readiness

About the authors

René Winkelmeyer works as Principal Developer Evangelist at Salesforce. He focuses on enterprise integrations, Lightning, and all the other cool stuff that you can do with the Salesforce Platform. You can follow him on Twitter @muenzpraeger.

Zayne Turner is a Developer Evangelist with Salesforce. Most recently, she’s been focused on Salesforce DX and ways to adopt modular development and working on the new Trailhead Sample App Gallery, which provides reference architectures and best practices for building apps on the Salesforce platform. You can find her on Twitter @zaynelt.


Connecting Your APIs with MuleSoft and Salesforce Identity

At their Dreamforce 2018 breakout session, Chuck Mortimore and Ashley Jose showed some love for securely connecting users with data using Salesforce Identity and MuleSoft.

What’s MuleSoft again?

If you attended Dreamforce 2018, it was hard to miss MuleSoft. Considering the Integration Keynote: MuleSoft Connects Every App, Data and Device, multiple breakout sessions, and various booths, the MuleSoft Anypoint Platform was well represented. But in case you missed Dreamforce 2018 or you just need a refresher, here’s a brief overview of what the platform is and what it does.

The MuleSoft Anypoint Platform is a single, unified platform for connecting data, apps, and devices. With MuleSoft Anypoint Platform, you can design, deploy, manage, and secure APIs to unlock data distributed across resources, such as SaaS apps or on-premises servers. MuleSoft also offers increased operational efficiency. For example, you can reuse Customer Support's order fulfillment status API to provide the Marketing department with customer purchasing history. To learn more, check out Getting Started with MuleSoft: A Quick Start Guide for Salesforce Developers.

MuleSoft and Salesforce Identity: Better together

Yes, MuleSoft Anypoint Platform rocks, but it’s even better when combined with the protection of Salesforce Identity — which is what Salesforce uses to secure CRM connections between customers, partners, and employees. Salesforce Identity is a composite of technologies (such as mobile-first identity, two-factor authentication, and single sign-on) that registers, authorizes, and recognizes users across digital channels. So when you combine MuleSoft Anypoint Platform with Salesforce Identity, you can securely connect customers, partners, and employees to the data they need to complete their jobs.

How do I build it?

By combining the awesomeness of MuleSoft Anypoint Platform with the superhero security of Salesforce Identity, you can safely expose your API assets and build accessible and reusable API networks. Follow these steps to configure Salesforce Identity with MuleSoft Anypoint Platform.

Step 1: Configure Salesforce to protect data stored in Anypoint Platform.

Start by calling Salesforce Customer Support to activate dynamic client registration and token introspection. Also, request an initial access token, which you’ll use in step 2. (Spoiler alert: These features will be generally available and you’ll be able to generate an initial access token in a future release.)

Set up single sign-on (SSO), with either SAML or OpenID Connect, using Salesforce as the identity provider. With SSO, your users can log in to Salesforce and access MuleSoft without a separate MuleSoft login.

Create an OAuth 2.0 connected app that integrates MuleSoft with Salesforce. You use the MuleSoft connected app to automatically create additional child connected apps in Salesforce. The child connected apps are needed for consumers to access data. (We’ll talk more about this process below.)


MuleSoft Anypoint Platform connected app created in Salesforce

Finally, configure the dynamic client registration and token introspection endpoints. The MuleSoft parent connected app sends a request to the dynamic client registration endpoint to create a child connected app. The token introspection endpoint allows the MuleSoft parent connected app to check the current state of an OAuth 2.0 access or refresh token for itself or any of its child apps.
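As a rough illustration of the introspection half of that exchange, here is the general shape of a token introspection request. This follows the standard OAuth 2.0 token introspection pattern; the exact parameters, authentication requirements, and connected app permissions are described in the Salesforce OAuth documentation, and all values below are placeholders:

# Ask Salesforce whether a given access or refresh token is still active
curl -X POST https://login.salesforce.com/services/oauth2/introspect \
  -d "token=<ACCESS_TOKEN>" \
  -d "token_type_hint=access_token" \
  -d "client_id=<PARENT_APP_CONSUMER_KEY>" \
  -d "client_secret=<PARENT_APP_CONSUMER_SECRET>"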

Step 2: Configure Anypoint to trust Salesforce.

In MuleSoft Anypoint Platform, click Access Management | External Identity. Define the following parameters to identify Salesforce as an external identity provider:

  • In Client Registration URL, enter the Salesforce dynamic client registration endpoint. Use this format: https://hostname/services/oauth2/register
  • In Authorization Header, register the initial access token (see step 1).
  • In Client ID, enter the unique consumer key generated in Salesforce for your MuleSoft parent connected app.
  • In Client Secret, enter the consumer secret generated in Salesforce for your MuleSoft parent connected app.
  • In Authorize URL, enter the Salesforce authorization endpoint. Use this format: https://login.salesforce.com/services/oauth2/authorize
  • In Token URL, enter the Salesforce URL for token requests. Use this format: https://login.salesforce.com/services/oauth2/token
  • In Token Introspection URL, enter the Salesforce token introspection endpoint. Use this format: https://hostname/services/oauth2/introspect


Registering Salesforce as an external identity manager in MuleSoft Anypoint Platform

For step-by-step instructions, hop over to the MuleSoft Help topic about configuring OpenID Connect dynamically. For more information about Salesforce OAuth endpoints, see Understanding OAuth Endpoints.

Step 3: Protect and deploy your APIs.

In MuleSoft’s API Manager, you can create an API gateway to control access to your APIs through policies. Configure the OpenID Connect Token Enforcement policy to require that consumers provide a valid token (which Salesforce provides upon client registration) to access the asset protected by the API gateway.

And don’t forget to head over to API Designer to build your APIs and publish them to Anypoint Exchange.

Step 4: Sit back and relax.

Now that you’ve configured Salesforce Identity and MuleSoft to protect your API assets and deployed your APIs to the portal, let’s see it all come together with this example.

A customer logs in to your Salesforce community to check the status of a recent snowboard order. Instead of filing a case, the customer clicks a new Order Status button to see how close the snowboard is to being delivered. Here’s what happens behind the scenes.

  1. The Order Status button calls a web service that is configured via an External Service, which is part of a flow embedded in the community. Users run the flow by clicking the Order Status button.
  2. During runtime, the External Service, acting as the API consumer, queries the API Gateway. In the Gateway, the External Service discovers an API Order Status asset containing data about customer orders. The External Service requests access to the API Order Status asset. (For information about how this external service is configured, watch the recorded session starting nine minutes into it.)
  3. The MuleSoft OAuth 2.0 parent connected app sends a POST request to the Salesforce dynamic client registration endpoint, requesting to create a connected app for the External Service.
  4. Salesforce verifies the initial access token in MuleSoft’s POST request authorization header and creates the child connected app. Salesforce then sends back a response with a client ID and client secret for the new connected app.
  5. The External Service (now a registered connected app) makes a call to the API gateway with its new client ID and client secret. The gateway intercepts the call and engages an OAuth flow. The gateway sends a call to the Salesforce token introspection endpoint to ensure that the new client’s access token is valid.
  6. Salesforce verifies that the access token is valid, and the API gateway gives the External Service access to the API Order Status asset.
  7. The External Service pulls back the data for this customer’s order and sends a response.

All this happens within a few seconds after the customer clicks Order Status, and it’s good news: The snowboard has shipped! Even more good news: The customer was able to self-serve because MuleSoft and Salesforce securely connected them to the data that they needed.

High-level architecture for combining Salesforce Identity with MuleSoft Anypoint Platform.

See it for yourself

To see how the feature in the example was configured, along with the high-level steps covered here, watch the recording of this session from Dreamforce 2018.

Want to learn more about MuleSoft? Take the Build Great APIs and Integrations with MuleSoft trail.

Want to learn more about Salesforce Identity? Check out the Secure Identity and Access Management trail and its Identity for Customers module.


Iterate Toward Greatness with Agile AI and Einstein Vision

If you work in any type of software development, you’ve probably heard about or used the agile methodology. Based on agile’s success in software development, the principles and processes have been applied to other disciplines, like project management, manufacturing, marketing, human resources, and even artificial intelligence (AI). Yes, these same principles and processes can be used to deliver data science functionality to your customers.

In this post, we cover what agile AI looks like in practice using an Einstein Vision project as an example. You can also apply all the concepts covered to Einstein Language. Here’s what we’ll do:

  • Follow the minimum viable product (MVP) principle to create a minimum viable dataset (MVD).
  • Learn when and how to add a negative label to improve your model.
  • Further improve the model by using feedback and progressively adding new labels.

Agile AI

A key principle of agile is customer satisfaction through early and continuous software delivery. What does this mean? It means delivering something that customers can use, and then continuing to improve and deliver at regular intervals.

In an agile AI approach, teams break down a data science project into small, manageable, and achievable chunks. The goal is to deliver something usable to the customer with each milestone. The team can then build on what they deliver and continue to incrementally improve.

Let's look at an example: say you're on a team tasked with building a car. The customer's ultimate goal is to get from point A to point B as quickly and easily as possible. This image illustrates what delivery might look like using a waterfall versus an agile approach.

The waterfall method uses a linear approach to achieve the final product. However, using the agile method, you could first deliver a skateboard. A skateboard is nowhere near the functionality of a car, but it does get you from point A to point B. Then, in each subsequent iteration, you improve the functionality, always delivering something usable.

Data science projects can be high visibility and high risk. Taking an agile approach to your next data science project is one way to reduce risk and ensure success.

So what does an agile approach to data science look like? Let’s take a look at a scenario that uses Einstein Vision to classify images. In this scenario, you use Einstein Vision to create a model that identifies whether an image is an apple or a pear.

During this process, we look at how to approach the project in an agile way, and at techniques you can use to incrementally improve your model and its accuracy. In this scenario, we have three sprints (milestones).

Sprint 1: Create an MVD for your MVP

One agile term you might frequently hear is MVP, or minimum viable product. The MVP is the minimum functionality the team delivers in a given sprint with just enough features to satisfy early customers. Those customers can then provide feedback for future product development. In agile AI, we start with an MVD, or minimum viable dataset.

You start with an MVD because collecting all the data needed for your ideal model could stall your progress. As with software, you start by defining the minimum set of labels or categories you can start with. Aim to build a decent model just for those labels and not one more.

The first thing you do is gather images that are representative of the types of images presented to the model for classification. Check out the blog post Why Representative Datasets Are Important for Computer Vision Models for more information. For our scenario, we collected a bunch of examples of apple and pear images.

After you pull all the data together, you create a dataset and then train it to create a model.
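Here is a sketch of those two calls, modeled on the other Einstein Vision calls in this post. The zip URL, dataset name, and dataset ID are placeholders; the zip is assumed to contain one folder per label (Apple, Pear) with the example images inside.

Create the dataset from the .zip of labeled folders:

curl -X POST -H "Authorization: Bearer <TOKEN>" -H "Cache-Control: no-cache" -H "Content-Type: multipart/form-data" -F "path=http://images/apples_and_pears.zip" https://api.einstein.ai/v2/vision/datasets/upload

Train the dataset to create a model:

curl -X POST -H "Authorization: Bearer <TOKEN>" -H "Cache-Control: no-cache" -H "Content-Type: multipart/form-data" -F "name=Apple and Pear Model" -F "datasetId=1081008" https://api.einstein.ai/v2/vision/train

Once training finishes, here's what the cURL call looks like to send an image of an apple for classification.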

curl -X POST -H "Authorization: Bearer <TOKEN>" -H "Cache-Control: no-cache" -H "Content-Type: multipart/form-data" -F "sampleLocation=http://images/red_apple.jpg" -F "modelId=SQ72XU2YTRK5VIAVVKEHXJS6EQ" https://api.einstein.ai/v2/vision/predict

The model returns a prediction similar to this JSON.

{
        "probabilities": [
        {
                "label": "Apple",
                "probability": 0.9982584
        },
        {
                "label": "Pear",
                "probability": 0.0017416108
        }
        ],
        "object": "predictresponse"
}

Good news: The model returns a high probability that the image is an apple. You spent Sprint 1 getting data and creating a model that your customers can use to identify images of apples and pears. Great job!

Sprint 2: Use a negative class to improve predictions

The apple and pear model is a good first iteration because it returns accurate predictions for apples and pears. You used representative data that included a wide variety of images of apples and pears and images of varying quality.

This model works great when the image being classified is an apple or a pear. But what kind of result does the model return with an image of an orange?

The model returns a prediction similar to this JSON.

{
        "probabilities": [
        {
                "label": "Apple",
                "probability": 0.81519085
        },
        {
                "label": "Pear",
                "probability": 0.18480918
        }
        ],
        "object": "predictresponse"
}

The labeled data from which the model was created contains only apples and pears. When you attempt to classify an image of another object, the model can only return a prediction that the image is an apple or a pear. The model only knows what you teach it, and right now it only knows apples and pears. You can further iterate and improve this model by including a negative class.

To add a negative class to your Einstein Vision dataset, you first collect images that aren't apples or pears. If you don't have images of your own, you could use the publicly available Caltech 256 dataset or a Kaggle dataset.

Put all the images in a folder named “Other,” and then create a .zip file. When the images are added to the dataset, they’re labeled Other. To add images to a dataset, you use the PUT API call that looks like this.

curl -X PUT -H "Authorization: Bearer <TOKEN>" -H "Cache-Control: no-cache" -H "Content-Type: multipart/form-data" -F "path=http://images/images_for_other.zip" https://api.einstein.ai/v2/vision/datasets/1081008/upload

After you add the images to the dataset, you retrain the dataset to update the model. Now when the model classifies an image of an orange, it returns the Other label with a high percentage.

{
  "probabilities": [
        {
        "label": "Other",
        "probability": 0.9998897
        },
        {
        "label": "Apple",
        "probability": 0.000109375935
        },
        {
        "label": "Pear",
        "probability": 9.801993e-7
        }
  ],
  "object": "predictresponse"
}

From the prediction results, you can tell right away that the classified image of an orange isn’t an apple or a pear. In Sprint 2, you further refined the model and made it easier for your customers to use.

Sprint 3: Improve your model with feedback

You can use the Einstein Vision feedback API calls to provide the ability for your users to give you feedback about predictions. For example, let’s say an image of an orange was sent in, and the model returns a high probability with the label Apple. Now that the model has an Other label, users can give feedback that the image was misclassified, and the actual label for that image is Other.

This cURL call is an example of adding the misclassified image to the dataset with the correct label.

curl -X POST -H "Authorization: Bearer <TOKEN>" -H "Cache-Control: no-cache" -H "Content-Type: multipart/form-data" -F "modelId=SQ72XU2YTRK5VIAVVKEHXJS6EQ" -F "data=@c:\data\orange_on_tree.jpg" -F "expectedLabel=Other" https://api.einstein.ai/v2/vision/feedback

After you add feedback examples to the dataset, an admin can review and use them to retrain the dataset to incorporate the feedback into the model. To include feedback examples, use the trainParams object, and pass in the value {"withFeedback": true}. The cURL call to retrain a dataset and include feedback looks like this.

curl -X POST -H "Authorization: Bearer <TOKEN>" -H "Cache-Control: no-cache" -H "Content-Type: multipart/form-data" -F "modelId=SQ72XU2YTRK5VIAVVKEHXJS6EQ" -F "trainParams={\"withFeedback\": true}" https://api.einstein.ai/v2/vision/retrain

As your model is used in production, you keep track of all the images being classified. Over time, you see that images of oranges are frequently sent for classification. The model correctly identifies those images and returns the Other label because of user feedback. However, the model would be even more useful if it could correctly identify oranges.

To improve the model, you add a new class called “Orange.” To do this, you can start by using the images of oranges previously labeled as Other. You can also gather images of oranges. Add them to the dataset and retrain the model, similar to how you added the label in Sprint 2. Now when an image of an orange is classified, the results return a high probability for the label Orange.

{
  "probabilities": [
        {
        "label": "Orange",
        "probability": 0.9448803
        },
        {
        "label": "Apple",
        "probability": 0.022566011
        },
        {
        "label": "Pear",
        "probability": 0.021325376
        },
        {
        "label": "Other",
        "probability": 0.0094356093
        }
  ],
  "object": "predictresponse"
}

 

In Sprint 3, you used the feedback feature in Einstein Vision to enable your users to send misclassified images back to the model along with the correct label. Based on the model usage, you saw an opportunity to improve the model by adding a label called Orange.

Einstein Vision gives you the tools to improve the accuracy and usability of your deep learning models. Combine these tools with an agile approach and you can quickly deliver functionality to your users, and then continue to evolve and improve that functionality. You can reduce risk and iterate toward greatness!

Resources


About the author

Dianne Siebold is a principal technical writer on the platform doc team at Salesforce.


5 Ways To Make Your Lightning Community Even Faster

The Lightning Community Builder enables you to easily build out beautiful, pixel-perfect digital experiences for your customers and partners using all clicks or a combination of clicks and code.

While the Lightning Platform is already very fast, there are several things you can do to optimize your community's load times for your users. Optimizing performance becomes especially important when you begin to build image-rich pages or complex pages with lots of Lightning components, or when your community members are geographically dispersed around the world. That said, all of us want our Lightning community to load as quickly as possible, and even the most basic community will load faster with the following five tricks!

Use a content delivery network (CDN)

Out of the box, all of the assets used to build your community (CSS files, JavaScript libraries, images, and so on) are served from your company's Salesforce instance. For example, if your Salesforce instance is on NA1, your server is located in North America. The further your customers are from your Salesforce server, the longer it takes to get assets to their computers and, thus, the longer the page load time is for your community. This is why you should use a CDN.

A CDN is essentially a service that will cache your assets on servers around the world, so those assets load quickly no matter where in the world people access your community. We are happy to set you up with a Community Cloud CDN for no additional cost, or you can use your own CDN. Follow these steps to set up the Community Cloud CDN.

Turn on progressive rendering


There are two options for how your community pages load. One option is to load components serially. If your pages load quickly, this is a good option for you. However, if your pages take a while to load and the Community Page Optimizer shows that certain components take longer than others, consider turning on progressive rendering. This allows components to load in parallel and lets you prioritize the order in which they load on a given page. For the best user experience, prioritize the items that are visible when the page loads over those that only appear when scrolling down. This is all set up through clicks — no coding required!

Enable browser caching


Rather than loading the same images over and over, you can enable browser caching, which will store a copy of the asset on your community user’s computer. This is enabled by default, so unless you’ve unchecked it, it should be set up and ready to go. Note: this feature is global and applies both to your internal org and all of your communities.

Use smaller image files

The more images you have on each community page, and the larger the image files are, the longer it takes for a computer or mobile phone to download and render those files. Ensure that your marketing department provides you with asset files that are optimized for the web. There is no hard-and-fast rule for the maximum size of an image, but a good guideline is to keep each image under 200KB for best performance.

Use Lightning Data Service in custom components

There are two ways to query data in Lightning components: one is using Lightning Data Service (LDS, which is the new-and-improved Lightning counterpart of the standard Visualforce controller) and the other is using a custom Apex class. As a rule of thumb, you will want to use Lightning Data Service as often as possible because it can return data faster and has built-in caching of data. This is especially important if you have multiple custom components on the same page that would be interacting with the same data. Rather than running custom queries using Apex for a component, LDS will only run the query once per page, without writing any server-side code. It also reuses that data for all the components on that page and even others, improving performance. If you cannot use Lightning Data Service due to your use case (learn more), you can use a custom Apex class paired with storable actions to make use of improved caching.

Before you get started, be sure to download the Community Page Optimizer from the Chrome Web Store. This Chrome plug-in helps you understand the order in which your components render and how long they take to render, so you can figure out what might need optimizing in your community. For more in-depth information, check out our developer guide below in the Additional Resources section.

Additional resources

About the author

Kara Allen is a Principal Solution Engineer on the Community Cloud team at Salesforce. Prior to joining Salesforce, she was the Community Cloud practice lead at a consulting firm, where she worked with customers to roll out partner, customer and employee communities. Follow her on the Trailblazer community.


VS Code for Salesforce Developers: Your Questions Answered

On our recent webinar VS Code for Salesforce Developers we received more than 300 questions from our (completely awesome) attendees — thank you! In this blog post, we’re summarizing and answering the most asked questions.

Are there costs for using VS Code for Salesforce development?

No. To get started using VS Code for Salesforce development, you need to install a few tools, which are all free of charge:

  • VS Code itself
  • Salesforce Extensions for VS Code (find installation instructions here)
    • If you scroll down on the Marketplace Extensions site to the Documentation for Included Extensions section, you can dig into the different extensions that are part of the extension bundle and their corresponding documentation. You can also check out the GitHub repository of the extensions, as it is an open-source project.
  • Salesforce CLI
    • If you’re new to the Salesforce CLI, make sure to check out the resources at the end of this post.

Do I have to use Salesforce DX?

Salesforce DX includes lots of tools and new functionality. For example: the Salesforce CLI, the Salesforce Extensions for VS Code, scratch orgs and unlocked packages could all be considered parts of Salesforce DX. So, yes, you’ll use some pieces of Salesforce DX if you use VS Code for your Salesforce development projects. But no, you don’t have to use every part of Salesforce DX in order to get started.

The real question here is: what parts of Salesforce DX do I need to get started in VS Code? And the real answer is: Salesforce CLI + Salesforce Extensions for VS Code, as mentioned above.

As you work, you may find that you have to connect to a Dev Hub org in order to execute some CLI commands. If you want to explore without enabling your production org as a Dev Hub, that’s fine. As of the Winter ’19 release, you can enable Dev Hub functionality in Developer Edition orgs!

Can I only use VS Code with scratch orgs?

No. The Salesforce CLI supports working with every type of org.

How do I connect to my orgs?

To work with a particular org, you need to authenticate into that org using the Salesforce CLI. One of the simplest ways to do this is to run a command like:

sfdx force:auth:web:login -a myAmazingOrg -r https://my-custom-domain.my.salesforce.com/

Running this command opens a browser window, which takes you to the login page for a Salesforce org. You can then log in with your username and password, and the CLI establishes an OAuth connection with that org. In the example above, we included the -a parameter to assign an alias to the org we're connecting to. Aliases help you identify orgs you've connected to the CLI and make it much simpler to run CLI commands against specific orgs. (Note: the alias is only set once you've successfully logged in or completed the authentication flow.) We also added the optional -r parameter to show how to control the login URL the browser navigates to. If you wanted to connect to a sandbox, you could modify the command to use -r https://test.salesforce.com/, as shown below.
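For instance, the sandbox variant of the command above would be:

sfdx force:auth:web:login -a MySandbox -r https://test.salesforce.com/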

For more about ways to connect your orgs to the Salesforce CLI, check out the auth commands section of the Salesforce CLI Command Reference docs.

After connecting, you can then use VS Code and/or the CLI to work with the connected org. Check out this great Dreamforce ’18 session from Peter Chittum about Applying the Salesforce CLI To Everyday Problems for examples. And before you ask — yes, you can authenticate into as many orgs as you like.
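Once you've authenticated into a few orgs, two related commands come in handy (the alias here is just an example):

# List every org the CLI knows about, including aliases and connection status
sfdx force:org:list

# Set a default username or alias so you don't have to pass -u on every command
sfdx force:config:set defaultusername=myAmazingOrg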

How can I get the Apex Replay Debugger? What can I do with it?

The Apex Replay Debugger is part of the Salesforce Extensions for VS Code and is a free tool for developers. How cool is that? Here are the features in a nutshell:

  • You can launch a debug session in VS Code based on a debug log.
  • The debug session runs, based on the debug log, against the Apex source in your VS Code project.
  • You don’t have to be connected to an org during debugging time.
  • You can debug any log (for example, a log sent to you via email). You only need the Apex source to be in Salesforce DX format in your VS Code project.
  • You can debug sessions from any kind of org.

Check out the Apex Replay Debugger Extensions Marketplace documentation for more info. Be sure you don’t confuse the Replay Debugger with the Apex Debugger, which is also part of the Salesforce Extensions. The Apex Debugger extension, which is intended to support live debugging in an org, requires a live debugger license.

What other extensions for VS Code do you use often?

In addition to the Salesforce Extensions, we use a lot of other extensions. The kinds of extensions you’ll find useful will vary based on what languages you’re working with, as well as your personal needs and preferences. Here is an excerpt of ones we use day-to-day:

Do I have to use Git? Is GitHub a requirement?

No. You do not have to use version control or any particular version control system to use VS Code and work with your Salesforce orgs.

However, you will be able to do much more of your work with the Salesforce Extensions and VS Code if you connect your projects to source control. And when you decide to adopt source control into your development, which is a best practice, you can use the source control system of your choice.

If you’re new to source control, we HIGHLY recommend checking out our blog series Getting Started with Salesforce DX. We explain the basics of working with the tools and features of Salesforce DX, how and why you should consider using source control and Salesforce DX in your development, and ways to get started.

How do I work with an existing org? How do I convert stuff into DX format?

If you want to see more of VS Code in action and how to use it with sandboxes and scratch orgs, make sure to register for our upcoming webinar Modern App Dev: Modular Development Strategies on November 6th at 10 AM PST. We’ll talk about the options available for app delivery and development on the platform and how you can use the different tools and features of Salesforce DX to support your development lifecycle. We’ll walk through a real-life scenario and show you how to:

  • Connect to an existing org
  • Get pieces of code and metadata from an org into a Salesforce DX project
  • Set up an initial source control strategy
  • Start using scratch orgs in your development lifecycle
  • Start developing unlocked packages

In the meantime, if you want to work with non-scratch orgs (like a sandbox), you should check out the new commands (in beta) for the Salesforce CLI: sfdx force:source:retrieve and sfdx force:source:deploy.

You should also check out our series on Modular Development and Unlocked Packaging, where we talk about pulling pieces of metadata out of an org, converting them into modules, and committing them to source control. (We went through all of this before the beta commands existed, so with them you can be even more efficient!)

How can I make a keyboard shortcut in VS Code to combine local save and remote deploy?

Creating keyboard shortcuts in VS Code is really easy. If you want to create a shortcut to run, say, sfdx force:source:push, you simply create a keybinding for that command.

At this time, VS Code doesn’t support chained commands or tying multiple commands to one keybinding.

Learn MOAR (aka Resources)

Trailhead: Quick Start: Salesforce DX
Salesforce Developers Blog: Salesforce for VS Code
Dreamforce 2018 Session: Be An Efficient Developer with VS Code

Be sure to register for our next webinar. We’ll see you then!
René & Zayne

About the authors

René Winkelmeyer works as Principal Developer Evangelist at Salesforce. He focuses on enterprise integrations, Lightning, and all the other cool stuff that you can do with the Salesforce Platform. You can follow him on Twitter @muenzpraeger.

Zayne Turner is a Developer Evangelist with Salesforce. Most recently, she’s been focused on Salesforce DX and ways to adopt modular development and working on the new Trailhead Sample App Gallery, which provides reference architectures and best practices for building apps on the Salesforce platform. You can find her on Twitter @zaynelt.


Understanding Experienced Page Time

With Winter ‘19, we have exposed Experienced Page Time (EPT) to everyone on the Salesforce Platform. This EPT metric, better understood as page load time, can be explored through the Lightning Usage App, which highlights performance at both the browser and page level. This blog post will help you understand how we define and calculate EPT.

Lightning Usage App

The Lightning Usage App is a new way to track adoption and usage of Lightning Experience so you can monitor progress and make informed decisions. With insights like daily active users, the number of users switching to Salesforce Classic per day, and the most visited pages in Lightning Experience, you can better understand your users’ needs and focus on the issues that really matter.

The app is available right from Lightning Experience. Simply click the App Launcher icon in the navigation bar, type Usage in the search box, then click Lightning Usage. In the app, you can click tabs in the ACTIVITY or USAGE sections on the left side of the page to view the associated data.

In the graph featured below, we can see the performance metrics for an example org leveraging the Lightning Usage App. This graph will vary from org to org because it is tailored to you. In this particular org, we can see that there was a spike in Android use in June and that Salesforce Mobile has the lowest EPT overall.

A view of the Browser Performance tab of the Lightning Usage App

 

In our next graph, we can quickly view the performance of our most viewed pages. We can see that in this org, Feed Item and Chatter pages tend to load very quickly.

A view of the Page Performance tab of the Lightning Usage App

 

Defining EPT

EPT is a performance metric Salesforce uses in Lightning to measure page load time. The idea behind this metric is to time how long it takes for a page to load so it’s in a “usable state” for a user to meaningfully interact with it.

The base definition of EPT may be simple, but a lot of things can affect it. We’ll explain more in the next section.

How do we calculate that?

A major difference between Salesforce Classic and Lightning Experience is that pages load progressively in Lightning, while pages in Classic are generated on request by the server. Because of this progressive loading on the client (any loaded component in the page might arbitrarily load more sub-components at any point in time), measuring when a page has “finished” loading is not straightforward. And since EPT is the page load time that end users actually experience, a lot of factors come into play in calculating the value.

For instance, component implementation details, errors, and caching can all negatively impact EPT. There are also external factors like network quality, browser performance, and user interactions with the page while it’s loading.

Some other things to consider:

  • Lightning UI is rendered client-side and hence is sensitive to browser performance.
  • Lightning UI requires many XHRs to render a page and hence is sensitive to network latency.
  • Complex pages with many custom fields and components will slow down the rendering of a page.

It’s not possible for us to account for all the external and internal factors that can impact page performance, many of which are beyond our control, so we use the following method to calculate EPT:

EPT is measured as the time from page start until the point at which no more activity has occurred for at least two frames (~33 ms). We require those two extra frames to avoid false positives due to asynchronous calls. “Activity” here includes any XHR activity, any storage activity, any user interaction, and client-side work of any kind on the main JavaScript thread.
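
As a purely conceptual illustration (this is not Salesforce’s actual instrumentation code), the quiet-frames idea can be sketched in a few lines of browser JavaScript. Assume markActivity() is wired up to be called whenever an XHR, storage operation, or user interaction happens:

    // Conceptual sketch only -- not Salesforce's instrumentation.
    const pageStart = performance.now();
    let lastActivity = pageStart;

    // Call this whenever an XHR, storage operation, or user interaction occurs.
    function markActivity() {
      lastActivity = performance.now();
    }

    // Declare the page "usable" once the main thread has been quiet
    // for roughly two frames (~33 ms) and report the elapsed time.
    function waitForQuietFrames(onEpt) {
      requestAnimationFrame(function check() {
        if (performance.now() - lastActivity >= 33) {
          onEpt(lastActivity - pageStart); // EPT ends at the last observed activity
        } else {
          requestAnimationFrame(check);
        }
      });
    }

    waitForQuietFrames((ept) => console.log('EPT (ms):', ept));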

What’s next?

As we can see, calculating EPT is not as straightforward as it seems, and there are a lot of factors that impact it. There are ways to impact EPT in a positive way, e.g. using profile-based layouts, hiding content behind tabs, and being on the lookout for anomalies in network topologies. Watch this space for an article on the steps you can take to improve EPT.

Want to learn how to move your app to Lightning? Check out the Improve Your Classic App by Moving It to Lightning Experience trail on Trailhead.

About the author

Venkat Narayan is a Senior Manager with the Product Management Instrumentation team. He and his team build products that empower our customers to measure what is critical to their application.

Posted on Leave a comment

A Look at Robotics Ridge (DF18 Edition)

At TrailheaDX in 2018, a few of us Salesforce evangelists had the brilliant idea to build a demo with robots. The idea was centered around the Fourth Industrial Revolution and showing how Robotics and Artificial Intelligence could play a big role in our lives. We wanted to show how Salesforce could be at the center of everything: your customers, partners, employees, and robots.

After the crazy success and fun we experienced with Robotics Ridge, we decided we wanted to bring it back and make it better than ever for Dreamforce 2018. This is the story of what we added to our demo, what we improved and how we overcame the challenges we faced at TrailheaDX. If you want to read more about the original demo and challenges, check out Philippe’s post.

An order fulfillment system

Robotics Ridge was an order fulfillment process with two separate pipelines. We wanted to show that an ordering process could have two simultaneous flows, similar to a real production factory. The left side and the right side of the stage each ran the same demo, but could be controlled independently.

Image of Robotics Ridge before the show began.

 

If you look carefully at the image above, you’ll notice there are five different robots on stage. From left to right, you’ll see an ARM, a custom Linear Robot, an ABB YuMi, another custom Linear Robot, and a final ARM. Each of these robots worked together to pick up a requested payload and deliver it to the front of the stage near the YuMi.

ARM

Image of the ARM robot on its own

 

The ARM is a robotic arm made by the company GearWurx. It is mounted on a tripod and can be controlled in a few ways. It has a manual controller with a slider for each degree of freedom, but we wanted the arm movements to be automated, so we used a Raspberry Pi 3B+ with a servo HAT. The ARM was also equipped with a Raspberry Pi Camera attached near the gripper. We wrote a Node.js app to control everything, from the movement to the picture capture.

Linear robot

Image of the Linear Robot on its own

 

Our linear robot was a custom robot we built just for TrailheaDX and then modified for Dreamforce. It is controlled by a Raspberry Pi 3B with a Pi-Plates Motor Plate for movement and an Arduino Mega that drives NeoPixel lights to guide your eye to where your package is, all controlled via a Python script. The lights, not pictured here, were attached to the front of the robot on the top bar near the cart. The build was inspired by the OpenBuilds Linear Actuator build and was created using aluminum V-Slot beams from OpenBuilds.

YuMi

Image of the YuMi on its own

 

The YuMi is a robot from ABB. It is a human-friendly robot with many collision sensors that keep it from hurting anything it hits. To help the YuMi see, we added a set of Raspberry Pi 3B+ boards with Raspberry Pi Cameras to give the YuMi ‘eyes’. These were attached right on top of the YuMi. The YuMi was controlled by a Python script running on an attached computer, and the ‘eyes’ were controlled by a simplified version of the ARM’s Node.js app.

The demo

Our demo started with Salesforce Mobile. As an attendee, you would arrive at our booth and be directed to our Salesforce Mobile App. You would have the option to select a type of Hacky Sack to order: a soccer ball, basketball, or globe.

Image of the mobile app request screen

 

Once you selected your item, the robots came to life! Your request in the mobile app would trigger a series of Platform Events that served as the driving force of the entire pipeline. The first event would tell our ARM robot to move and ‘look’ at the table for your requested item. Because we equipped our ARM with a camera, it could take a photo, upload it to the platform, and then run Einstein Object Detection on the picture. Object detection returns a prediction confidence for what it sees, as well as the location of each item in the photo.
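
To give a flavor of that detection step, here is a hedged Node.js sketch (not our actual device code) that sends a captured image to the Einstein Object Detection endpoint described in the Einstein Vision REST API docs; the model ID, image path, libraries, and token handling are placeholders:

    // Hedged sketch of an Einstein Object Detection call -- not the actual demo code.
    const fs = require('fs');
    const FormData = require('form-data');
    const fetch = require('node-fetch');

    async function detectItems(accessToken) {
      const form = new FormData();
      form.append('modelId', 'YOUR_DETECTION_MODEL_ID');               // placeholder model ID
      form.append('sampleContent', fs.createReadStream('capture.jpg')); // photo from the ARM camera

      const res = await fetch('https://api.einstein.ai/v2/vision/detect', {
        method: 'POST',
        headers: { Authorization: `Bearer ${accessToken}` },
        body: form
      });
      // The response lists each detected label with a probability and bounding box.
      return res.json();
    }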

A screenshot showing the last image the ARM 1 device took at Dreamforce.

 

Based on its confidence, the ARM would then move to pick up the item and transfer it to the Linear Robot. In the image above, we see two soccer balls and a basketball, and the ARM would have been able to pick up any of them if requested. The ARM is not a human-friendly robot because it doesn’t have any motion sensors or anything else to tell it that a human is nearby. To protect the humans interacting with our demo, after the ARM dropped off the requested item it would move to a safe ‘home’ position. Once it was all the way home, it would send a platform event to let the system know it was safe to continue.
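
Publishing that “safe to continue” signal from a device is straightforward, because a platform event can be published by inserting its __e sObject. Here is a minimal sketch, assuming a hypothetical Robot_Status__e event with a Status__c field and jsforce for the API connection (not our actual code):

    // Minimal sketch, assuming a hypothetical Robot_Status__e platform event.
    const jsforce = require('jsforce');

    async function reportArmHome() {
      const conn = new jsforce.Connection({ loginUrl: 'https://login.salesforce.com' });
      await conn.login(process.env.SF_USERNAME, process.env.SF_PASSWORD);

      // Publishing a platform event is just an insert on its __e sObject.
      await conn.sobject('Robot_Status__e').create({ Status__c: 'ARM_HOME' });
    }

    reportArmHome().catch(console.error);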

Our linear robot came next, and its job was to get the item to the YuMi. Once it reached the YuMi, it would send a platform event to Salesforce, and Salesforce would send the next platform event, this time telling the YuMi to do its job. The YuMi would pick up the item and hold it up to a camera to take a final image. This image was then uploaded to the platform, and Einstein Image Recognition was run on it. Einstein would then tell Salesforce which item it saw. This served as a verification step to make sure the correct item was delivered to the YuMi.

A screenshot showing the last image the YuMi 1 device took at Dreamforce

 

Below you can see how all of the events were sent/received between each system.

Image of the different events sent back and forth

 

Using Cases to troubleshoot and make our demo run smoothly

While one might hope that the demo would always run smoothly, we knew that wasn’t reality. In order to have Quality Assurance (QA), we developed a few ways for our system to let us know when things had gone wrong.

Our first use case was when an item was requested that was no longer ‘in stock.’ If Einstein Object Detection did not find the item you requested on the table, the mobile app would prompt you to add any important details and then save a case to let someone know to restock the table.

The second use case was related to the YuMi and the verification step. If something happened in the pipeline (e.g. the ARM picked up the wrong item, didn’t pick anything up, or something interfered with the linear robot), the YuMi would catch it. Not only would Salesforce Mobile prompt you to save a case, but the YuMi would also drop the item in a different area for misdeliveries.

Big Objects and reporting

Our dashboard of the more than 1,000 deliveries made over the course of Dreamforce week

 

In our first version of Robotics Ridge, we were unable to report on deliveries because platform events do not persist. To overcome this, we created an object to hold information about each delivery. Then, because we knew we would be storing a lot of information, we leveraged Process Builder to archive delivery data and event data into Big Objects.

Process Builder for event and delivery archival

 

With reporting in place, we could look at the data and tell how well we were performing. Once we had set up for Dreamforce, we ran a few deliveries to make sure everything was running as planned.

Our dashboard from the day before Dreamforce.

 

With our report, we were able to see that our average pickup confidence was 87.5% and that we had a much harder time identifying soccer balls than the other types of payloads. We decided to take over 1,000 more photos, tag them, and retrain our model just before the Dreamforce floor opened to the public. With our newly trained model, we were able to increase our pickup confidence to 96.3% and deliver roughly equal numbers of each payload.

Challenges

Robotics Ridge was a lot of fun, but of course we faced some challenges along the way. Our biggest challenge was that, this time around, we had to work with an existing code base. When we started development, our code was scattered across many locations, and our Salesforce development wasn’t in Salesforce DX yet. To solve these issues, we kept track of all of our repositories in Quip and migrated our Salesforce instance to Salesforce DX. To make the migration successful, we had to write a few custom scripts that allowed us to deploy all of our code and configuration, in the right order, into any new scratch org.
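
As a rough illustration of that kind of script (not our actual scripts; the alias, permission set name, and data plan path are placeholders), a scratch org setup might chain standard CLI commands like this:

    # Placeholder setup script: create a scratch org, push source, then seed it.
    sfdx force:org:create -f config/project-scratch-def.json -a robotics -s
    sfdx force:source:push -u robotics
    sfdx force:user:permset:assign -n Robotics_Ridge -u robotics
    sfdx force:data:tree:import -p data/plan.json -u robotics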

As with any project, we also had to balance time with feature requests. We would have loved to have more time to add a “Dance Dance 4th Industrial Revolution” event that would make the robots dance and a disco ball that could rise up from the back of the stage. Alas, time was not on our side so we were unable to make that magic come to life.

Final thoughts

Salesforce brings together all of the important things in your business: customers, partners, employees, and, in the future, your robots. With Salesforce, it was easy to integrate Platform Events, Einstein, Service Cloud, and Analytics to show off an end-to-end pipeline. Our robotic order fulfillment system is a great example of how you can start to leverage the many parts of the Salesforce Lightning Platform and power the future.

How can I get started?

If you want to learn how to bring the full power of the Salesforce Lightning Platform to your next project, you can learn everything you need to know on Trailhead. Get started with this Robotics Ridge trailmix and learn about all of the different features we used to power this amazing demo.

Heather Dykstra
Developer Evangelist, Salesforce
@SlytherinChika