
Working with Modular Development and Unlocked Packages: Part 4

This is the fourth installment in a series exploring how to begin working with your apps in modular pieces, incorporating packages into your app development lifecycle, and what packaging may mean for your team’s change management and release processes. Over the course of this series, we’ll talk about:

Part 1: What even is a package, anyway? How can you start to experiment with segmenting your org?
Part 2: How can you start to organize metadata from an app, let alone an entire org, into packages? How do you tackle organizing your metadata and projects in source control?
Part 3: What do these changes mean for app builder workflows? What will happen if I install an unlocked package into my production org today?
Part 4: How can you define a successful Git branching strategy that works best for most team sizes? How, when and where should packaging be added to your continuous deployment?

After you have untangled your application and set up your package directories and initial package version, it’s time to work on a strategy for managing your package versions and active development projects in your source control systems. Having a good and practical Git strategy will be crucial for your Continuous Integration and Continuous Deployment workflows. This blog post covers how to create such a strategy.

An intro to Git branching

Source control (and with that, Git) has been around for a long time in the Salesforce world. But for those who haven’t worked with Git and the Salesforce Platform yet, let’s briefly look at a widely adopted Git branching strategy.


In a nutshell:

- All active development work happens in the development branch.
- Creating new features or fixing bugs happens in dedicated feature or hotfix branches. Those branches inherit from development.
- After the feature/bug has been successfully tested, the code gets merged back into the development branch.
- Code that gets shipped into production gets merged from development into master.
- As versioning is important, the master branch gets tagged with a version number.
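As a concrete sketch, here is that flow expressed as raw Git commands in a throwaway repository. All names here (feature/login, v1.1.0, the file contents) are placeholders, not from any real project:

```shell
# Minimal walkthrough of the branching model above, in a scratch repository.
set -e
cd "$(mktemp -d)"
git init -q
git config user.email "you@example.com"
git config user.name "Example Dev"
echo "v1" > app.txt
git add app.txt
git commit -qm "initial release"
git branch -M master              # our production branch
git checkout -qb develop          # all active development happens here
git checkout -qb feature/login    # feature branches inherit from develop
echo "login" >> app.txt
git commit -qam "add login feature"
git checkout -q develop           # feature tested: merge back into develop
git merge -q --no-ff -m "merge feature/login" feature/login
git checkout -q master            # ship: merge develop into master
git merge -q --no-ff -m "release 1.1.0" develop
git tag v1.1.0                    # tag the release with a version number
git tag
```

The `--no-ff` flag keeps an explicit merge commit for each feature, which makes the history diagram below much easier to read.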

While that itself sounds simple, it can get a bit complicated, especially when you have multiple branches running at the same time (which is not uncommon in software development, especially when you work in a team).

Here’s a small real-world example from a code race that I did some years ago.

Blue is master, green is develop, and the others are feature and hotfix branches.

Git branching strategy

Building upon that branching strategy and your own company’s requirements, you may want (or need) to extend the model. Before you start designing your Git strategy, you should have answers to the following questions:

- At what stage in the process should automated tests run?
- When should unlocked package versions be created, tested, and cleaned up?
- How should promotion (aka installation) of package versions to environments like QA sandboxes or even production be executed?

You may answer some or all of those questions with “I want that on every commit.” But then you’ll experience the delay that each of those tasks adds to every single commit.
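One common compromise is to key each CI action off the branch being built, rather than running everything everywhere. A hypothetical helper illustrates the idea (the branch patterns and action names are placeholders for real pipeline steps):

```shell
# Map a branch name to the CI work it should trigger (sketch only).
ci_action() {
  case "$1" in
    feature/*|hotfix/*) echo "run-automated-tests" ;;             # every push to a work branch
    develop)            echo "create-unlocked-package-version" ;; # after merges to develop
    master)             echo "promote-package-to-production" ;;   # releases only
    *)                  echo "skip" ;;
  esac
}

ci_action "feature/login"   # -> run-automated-tests
ci_action "develop"         # -> create-unlocked-package-version
```

This keeps the slow steps (package version creation, promotion) off your feature branches while still running them automatically at the points where they matter.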

Let’s Talk Deployment (And Reasons Why They Succeed or Fail)

Successfully moving changes between environments doesn’t happen by accident. It takes time and planning and perhaps more than one attempt at deployment. Deployments don’t always succeed, and one of the most frustrating issues can be unravelling what the message(s) returned by the Metadata API really mean when it comes to deployment.

I want to take a look at how you can find information about issues with your deployments, how to identify what failure messages are actually telling you, walk through some common deployment errors and share some useful patterns for structuring your deployments.

Where do failures show up?

The best place to look for errors (and the quality of the messages you’ll find) depends on what mechanism you’re using to deploy your changes. In every Salesforce environment (think scratch org, sandbox, production) you can get an overview of that org’s deployment history under Setup > Deployment Status.

However, the amount of detail available under ‘View Details’ can be very limited, depending on your deployment method.

If you’re using Change Sets to deploy your changes, you get a bit more detail here than other kinds of deployments get. But this will also be the only place you can find error messages related to your change set deployment. (Clicking Setup > Inbound Change Sets > Whatever Change Set Name > View Details takes you to the same data/screen as clicking Deployment Status > View Details.)

If you’re deploying directly with the Metadata API, you’ll see almost no meaningful details in this part of Setup. Depending on your IDE, you may find details about your deployments in your IDE logs, or your IDE may have a dedicated UI for working with your deployments where you can see some error messages.

Using the Salesforce CLI and VS Code, you can get information about your deployment by adding the -w and --json parameters when you run sfdx force:mdapi:deploy.

This format can be difficult to parse when dealing with a long series of messages.
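To illustrate, here is one rough way to pull just the failure messages out of a saved --json deploy result using standard shell tools. The payload below is abridged and made-up; the real shape can vary by CLI version, and jq is a cleaner choice if you have it installed:

```shell
# A trimmed-down, hypothetical example of what a failed deploy's JSON can
# contain, as if saved with: sfdx force:mdapi:deploy ... --json > deploy.json
cat > deploy.json <<'EOF'
{"status":1,"result":{"status":"Failed","details":{"componentFailures":[
{"fullName":"MyClass","problem":"Dependent class is invalid and needs recompilation"},
{"fullName":"Account.My_Field__c","problem":"Picklist value not found"}]}}}
EOF

# Extract just the human-readable problem messages, one per line.
grep -o '"problem":"[^"]*"' deploy.json | sed 's/^"problem":"\(.*\)"$/\1/'
```

Even a quick filter like this makes it far easier to see what actually failed than scrolling through the full stream of messages.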

In my experience, the easiest place to view granular messages when working with Metadata API deployments is Workbench. The results of an identical deployment to the one above, handled via Workbench, look like this:

Here, we see the same information as in the Salesforce CLI output, but it’s easier to understand and easier to track the status and details of every item in the deployment. These details are crucial when you’re unravelling issues with a deployment.

Understanding Metadata API failure messages

The key to understanding what the Metadata API can tell you about your deployments is learning to distinguish what’s actually meaningful information and what’s just noise.

The stream of information returned by the Metadata API during a deployment is really that: a stream. It’s a feed of everything that’s happening during a deployment attempt. A giant list of errors doesn’t necessarily translate into a giant list of issues that need to be corrected with your deployment.

If you see that you have errors in your deployment, start by looking for the root failure: many of the other messages may simply be downstream noise cascading from that first problem.

2.2 Use Apex to talk to the database and persist data

The Concept

Let’s continue with our previous post. You can find the code we start from in that post.

One of the main things we need to think about is how to store data in the database, so that when we close our current Visualforce page, our information is not totally gone. That is why we want to use an Apex controller to talk to the database and save the data.

Related Code and Explanation

There are several ways for a Visualforce page to invoke Apex methods, but the easiest one is to use the apex:commandButton tag.

In our previous code we have:

<apex:commandButton action="{!AddTransaction}" value="Add Transaction"/>

The value attribute sets the text displayed on the button. So here, the button shows the text: Add Transaction.

And the action attribute specifies the method to invoke in the controller or extension Apex class. Here, the method name is AddTransaction, so when the user clicks this button, the AddTransaction method is called.

    public PageReference AddTransaction() {
        // Link the new transaction to the selected merchandise record
        curTrans.Merchandise__c = curMer.Id;
        insert curTrans;

        // Redirect to the detail page of the newly inserted transaction
        PageReference pr = new PageReference('/' + curTrans.Id);
        return pr;
    }
Now let’s look at this code. First, notice that the commandButton’s Apex method returns a PageReference. PageReference is an Apex class that represents a Salesforce page, either a standard page or a Visualforce page. The return value determines which page the user is redirected to after the method completes. If the return value is null, the user stays on the current page. Here, the return value pr points to the newly added transaction’s page; we achieved this by building a URL from the new transaction’s Id.

If we want to point to a Visualforce page, we can assign Page.XXX to pr, where XXX is your Visualforce page’s name.

Please note that we also have a Cancel button here. We can still use the standard controller’s actions in the extension class.


Working with Modular Development and Unlocked Packages: Part 3

This is the third installment in a series exploring how to begin incorporating packages into your app development lifecycle, and what adoption may mean for your team’s change management and release processes. In this post, we’ll look at building unlocked packages and working with versioning, as well as setting up dependencies between packages.

In the last post, we discussed how we approached segmenting an app into package modules. If you haven’t read that post yet, it’s a good idea to go check it out and come back here when you’re caught up.

Turning modules into packages

As I built, I could control which modules deployed and the order they deployed by modifying my sfdx-project.json. If I didn’t want a module to deploy, I removed its entry from the .json. When I wanted to deploy modules in a certain order — simulating my package installation order and testing my dependency management — I added individual modules into my sfdx-project.json one-by-one, and ran sfdx force:source:push in between each addition.

Once I felt reasonably confident in my deployment order and modules, I needed to turn my modules into packages.

Before you start to experiment with package creation and versioning, be aware that in Summer ’18, you can update the metadata in an unlocked package, but you cannot delete an unlocked package. Because I was using a dev hub intended for experimenting with packages, I had the luxury of not worrying that I was going to clutter up my team’s working environment with discarded packages. But even with that luxury, just three of us working on a handful of packaging experiments meant that a “simple” command like sfdx force:package:version:list returned quite a bit of junk. If you’d like to experiment with packaging, but don’t want to put those experiments into your production Dev Hub, I recommend using a trial Dev Hub environment. (Or, if you’re reading this after Summer ’18, go check and see if you can delete or deprecate unlocked packages.)

When it’s time to generate your packages, you’ll issue a series of force:package:create commands. During this step, you’ll want to pay attention to the location of the modules you want to package within your project’s root directory. At a high level, my project looked like this:

|____Easy-Spaces
| |____.sfdx
| |____config
| |____data
| |____es-base-code
| |____es-base-objects
| |____es-base-styles
| |____es-images
| |____es-space-mgmt

Only four of these modules would become packages: es-base-objects, es-base-code, es-base-styles, and es-space-mgmt. To create my first package, for my es-base-objects module, I issued the following command:

sfdx force:package:create -n ESBaseObjects -d 'Easy Spaces base objects' -t Unlocked -r ./es-base-objects


- Package type: When you create a package, you are required to say which package type you want to create. The -t Unlocked part of my command creates an unlocked package.
- Path: The -r parameter allows you to specify a relative path to your package contents. This allows you to issue multiple package create commands without having to change directories. It also helps your final
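For context, after you run force:package:create for each module, your sfdx-project.json describes every package directory, and the Dev Hub records an alias for each package ID it assigned. A sketch of roughly what that can look like for this project (the 0Ho… ID is a made-up placeholder, and the exact fields can vary by CLI version):

```json
{
  "packageDirectories": [
    { "path": "es-base-objects", "default": true },
    { "path": "es-base-code" },
    { "path": "es-base-styles" },
    { "path": "es-space-mgmt" }
  ],
  "namespace": "",
  "sfdcLoginUrl": "https://login.salesforce.com",
  "sourceApiVersion": "43.0",
  "packageAliases": {
    "ESBaseObjects": "0HoXXXXXXXXXXXXXXX"
  }
}
```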

Running Tests 5x Faster Using SFDX and Heroku CI

For a software engineer, there’s nothing more frustrating than a slow, unreliable test suite. My Heroku Connect team had a lot of fast, dependable unit tests, but our end-to-end integration tests with Salesforce were taking up most of our Continuous Integration execution time. In addition to being slow, sometimes they’d fail because of lingering data from previous test runs, or interference from other test runs.

Heroku has a test runner called Heroku CI that’s similar to CircleCI and Travis CI. As part of a recent hack week, I attempted to use Salesforce DX (SFDX) and the new parallel test runs feature of Heroku CI to speed up our CI test suite. We desperately needed to improve our productivity — we were having to wait as long as 25 minutes for tests to complete. The most time-consuming tests were our integration tests for Salesforce-to-Heroku data syncing.

How things used to be

We ran our tests serially. For Salesforce integration tests, we had a pool of nine Salesforce organizations (orgs for short) that were set up with identical metadata. Each test run used a different long-lived org and we figured that with our relatively small team, using nine orgs to cycle through would be enough. But we had to be extra careful about cleaning up after tests since the orgs were being reused.

Scratch orgs

SFDX scratch orgs are disposable Salesforce orgs that are created on demand. They’re typically used for development and testing by Salesforce developers. Even though our tests didn’t involve Apex or Lightning, scratch orgs were still perfect for our use case. We realized that we could create multiple scratch orgs in every test run so that we could execute Salesforce tests in parallel.

SFDX and Heroku CI

When I started to investigate scratch orgs and CI, I found a lot of reference material for Travis CI and Circle CI, but nothing that explained how to integrate scratch orgs with Heroku CI. I’ll take you through all the steps I followed.

To begin, I installed the Salesforce CLI on my local machine and enabled Dev Hub in a Salesforce org. I also had to create a Connected App in the org, configured for JWT-based authorization, so that my CI scripts could create scratch orgs without any human interaction.

I had to refer to two things that got created when setting up the Connected App: the Connected App’s client ID and the server.key file that contained my private key.

The sfdx force:auth:jwt:grant command requires both the client ID and the server.key file to be passed in. How do we provide them during CI execution? Heroku CI allows you to define CI-only config variables in the Settings page of a Pipeline, so I stored the client ID in a config variable called CI_SF_CLIENT_ID. For the private key file, I couldn’t use a config variable, so I needed to encrypt the file as server.key.enc, save it in source control, and then decrypt it while running in CI. Kudos to David
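The encrypt-once, decrypt-in-CI round trip itself is easy to sketch with openssl (assuming a reasonably recent OpenSSL with -pbkdf2 support). The key contents and passphrase below are obviously fake; in a real pipeline the passphrase would live in a CI-only config variable, never in source control:

```shell
set -e
cd "$(mktemp -d)"
PASSPHRASE="example-secret"              # in real life: a CI config var, not a literal
echo "fake-private-key" > server.key     # stand-in for the real JWT signing key

# One-time, locally: encrypt the key so server.key.enc can be committed safely.
openssl enc -aes-256-cbc -pbkdf2 -salt -in server.key -out server.key.enc -k "$PASSPHRASE"

# In CI: decrypt before passing the key to sfdx force:auth:jwt:grant --jwtkeyfile.
openssl enc -d -aes-256-cbc -pbkdf2 -in server.key.enc -out decrypted.key -k "$PASSPHRASE"

diff server.key decrypted.key && echo "round-trip OK"
```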


Dreamforce 2018 Developer Track Call for Presentations is Open!

It’s almost time for Dreamforce 2018 and we want YOU to speak! We’re officially launching our Call for Presentations for the Developer Track today. We’re looking for Salesforce Developers who want to share their knowledge and best practices with the developer community and help fellow Trailblazers skill up, grow, and succeed.

We’ll be closing our Call for Presentations on July 15, 2018. Our aim is to have all acceptances out by August 4, 2018. New to speaking or never attended Dreamforce before? No worries! If you’re knowledgeable and passionate about your topic, you’re already off to a good start. We love first-time speakers and we’ll provide support as you prepare for your talk, including a great Public Speaking Skills module on Trailhead.

What’s the difference between types of sessions?

We have two types of sessions for the Developer Track at Dreamforce: Breakout sessions and theater sessions.

Breakout sessions will be in 40-minute blocks and held in dedicated rooms with seats. We recommend you plan 30-35 minutes for your presentation, leaving the last 5-10 minutes for questions and discussion. A breakout session is ideal if you want to dive deep into your topic.

Theater sessions will be in 20-minute blocks and are held on stages throughout the expo floor. There will be some seating available. These shorter sessions are ideal for overviews and new topics.

What makes a good proposal?

First, you’ll need to start with a good Session Title. Your title should be concise, match what you’re covering, and be interesting to Salesforce Developers. At Dreamforce, there are tons of options for content, so make the title of your session clear.

The Session Abstract field is where you’ll give a short paragraph to quickly tell us what you’re going to talk about. Be creative! Don’t just explain your slide deck, but share your ideas for interacting with the audience: What are you going to build/demo? What discussion will you lead? What will your audience be able to take home with them and show to their companies? Your abstract will eventually be published on public-facing materials and this will be an opportunity to pitch to attendees on why they should attend your session (as well as why our team should select your session to be at Dreamforce!).

Sitting on more than one presentation idea? We ask that you submit a separate proposal for each topic.

What should my topic be?

We’re looking for fun, engaging sessions that provide actionable content that helps Salesforce Developers build their skills, elevate their careers, and deliver innovative technology solutions for their companies. Here are a few session ideas we’d like to see:

- How to Build a Lightning App from Start to End
- A Beginner’s Guide to Lightning Components
- Process Automation: Actions, Process Builder, or Lightning Flow?

That’s just scratching the surface! We want to hear any and all of your ideas for developer content, so tell us what you want to present.

What’s next?

Once you’ve submitted your proposal, it will be reviewed by our


ICYMI @ TrailheaDX’18: 4 Session Videos About Process Automation

Salesforce makes it simple to automate processes in your apps and make experiences seamless for your customers. As a developer, there is much more you can do with Lightning Flow. In addition to utilizing the tools in Lightning Flow — Process Builder and Cloud Flow Designer — you can use Apex when you need more functionality.

We talked a lot about Lightning Flow and other automation tools at TrailheaDX’18 so we wanted to share recordings of a few sessions. Take a look at these videos and learn how to become an automation master!

Introduction to Lightning Flow  

The best way to learn about Lightning Flow is to just get started! Learn from product management VP Arnab Bose and product marketing manager Wendy Lee on how to build rich, engaging, and guided visual processes to automate your business and how to utilize Lightning Components to orchestrate dynamic screens. (Level: Beginner)

Go with the Flow: Top UX Tips for Lightning Flow  

Learn the top three tips for building user-friendly experiences in Lightning Flow. Principal UX designer Owen Schoppe talks about them in this presentation. (Level: Beginner)

Bring Engaging Experiences to Life with Lightning Flow  

Arnab Bose, senior product management director Alex Edelstein, and principal software engineer Nathan Lipke talk about how you can create powerful, versatile, and visually pleasing process-based apps with Lightning Flow and base and custom Lightning Components. (Level: Intermediate)

Orchestrate Multiple Processes and Flows with Platform Events  

Platform Events is an enterprise messaging system, and one of its many functions is allowing flows to communicate with each other. In this presentation by Nathan Lipke and product managers Jason Teller and Jay Hurst, you’ll learn how to coordinate multiple flows by using platform events. (Level: Intermediate)

Watch and share all the videos here!

More resources

For more on Lightning Flow, be sure to complete the Lightning Flow module on Trailhead. Also, if you missed our Developer Preview Live webinar on May 25th, you can watch it on-demand. One of the topics we discussed was what’s new in Lightning Flow for the Summer ’18 release!

For more on Platform Events, check out the Platform Events Basics module on Trailhead.

Check out other posts in this series

This marks the end of our ICYMI @ TDX18 series but don’t worry, we’ll be launching our Call for Presentations for Dreamforce 2018 real soon! Perhaps we’ll see your session in a blog series. Stay tuned, and thanks for revisiting TDX18 with us!

Integrate External Content into Communities Using CMS Connect JSON

A strong digital identity allows you to be close to your customers across channels. But it’s often a frustrating process when it comes to delivering, managing, and optimizing experiences consistently across every digital touchpoint.

That can sometimes involve having to recreate content, branding or blogs on your various systems. If you have to regularly update that content, it quickly becomes a costly and repetitive manual task to keep your different systems in sync.

This will no longer be the case if you use CMS Connect. CMS Connect allows all of our Community Cloud customers to leverage their existing content and pull it dynamically into their Lightning Communities. It lets you centralize your content in whatever CMS you’ve chosen, without having to re-create content when you want to leverage it in your communities. You’ll save hours of time and effort as you manage your digital experiences!

CMS Connect can pull content from Adobe Experience Manager, WordPress, Drupal, Sitecore, SDL, and others that support content structured as JSON or HTML fragments.

This blog focuses on CMS Connect (JSON). For CMS Connect (HTML), see Connect Your Community to Your Content Management System.

You might be wondering when to use CMS Connect JSON and when to go for HTML.

CMS Connect (HTML) lets you integrate fragments of your HTML web content (e.g. headers, footers, and banners) so that your communities share the same branding experience as your website. CMS Connect (JSON) is best when you want to bring in content lists (e.g. blogs, articles, product catalogs, and files), including authenticated content.

While this post discusses WordPress as an example, it applies to any CMS that supports JSON.

Let’s get started walking through how you set this up in your Lightning Community!

Before using CMS Connect (JSON)

Before diving in, review these prerequisites so everything goes smoothly.

1. Enable CORS

CMS Connect uses Cross-Origin Resource Sharing (CORS) to access external public content on the Salesforce side. (CORS is not needed if the content you are pulling in is authenticated.)

Make sure to add your Community host to the list of trusted hosts in the CORS header in your CMS system, if the JSON endpoint is not accessible to the Salesforce Communities domain.

For more information about CORS, check out this page.

2. Get the JSON URL

Identify the JSON URL for your external content. The supported MIME type is application/json.

For example, here’s how to get an endpoint from WordPress:

And here are some URL examples for different assets in WordPress:
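Since the screenshots don’t reproduce here, these are the kinds of routes you would typically use. The /wp-json/wp/v2/… paths are standard WordPress REST API routes; the domain is a placeholder for your own blog:

```shell
# Illustrative WordPress REST API endpoints (domain is a placeholder).
BASE="https://yourblog.example.com/wp-json/wp/v2"
echo "$BASE/posts"        # list of blog posts
echo "$BASE/media"        # media items (images, files)
echo "$BASE/categories"   # post categories
```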

CMS Connect (JSON) Configuration Step 1: Create a CMS connection

Before you can start pulling content into Lightning Community Builder, the first thing you need to do is to actually create and configure the CMS connection in Community Workspaces.

1. Go to Community Workspaces.
2. Click CMS Connect.
3. Click New to create a new CMS connection.
4. For Name, enter a friendly name for the connection, for example, Capricorn WordPress.
5. For CMS Source, enter the source, such as WordPress. Choose Other only

Working with Modular Development and Unlocked Packages: Part 2

This is the second installment in a series exploring how to begin working with your apps in modular pieces, incorporating packages into your app development lifecycle, and what packaging may mean for your team’s change management and release processes. Over the course of this series, we’ll talk about:

Part 1: What even is a package, anyway? How can you start to experiment with segmenting your org?
Part 2: How can you start to organize metadata from an app, let alone an entire org, into packages? How do you tackle organizing your metadata and projects in source control?
Part 3: What do these changes mean for app builder workflows? What will happen if I install an unlocked package into my production org today?
Part 4: How can you define a successful Git branching strategy that works best for most team sizes? How, when and where should packaging be added to your continuous deployment?

In this post, we’ll look at getting your Salesforce DX project ready to work with multiple packages and segmenting an app into packageable modules. In the last post, we talked about what the shift to package-based delivery can offer teams and walked through extracting metadata from your org to begin migrating to a modular app dev model. If you haven’t read that post yet, it’s a good idea to go check it out and come back here when you’re caught up.

Approaching package construction using deployment dependencies

When I started to think about packaging and creating units for my packages, I started by thinking about deployment. Packaging is meant to make deploying easier, more standardized, and more repeatable. So I thought: why not create smaller units of metadata (which may or may not turn into packages) based on my most successful deployment habits?

I asked myself: What are the most common reasons my deployments have failed? Is there a pattern to the deployment order/release shapes I started relying on for my releases?

The resounding answer, to both questions, was dependency management. We talked about this a bit in our last post. Dependencies come in many forms, but managing the dependencies between metadata that cause bottlenecks during deployment seemed like a good place to start. So I decided to build layered packages, based on the dependency management patterns I’ve come to rely on during deployment.

Is this the only way to segment your org into packages? Absolutely not. But you’ll have to address the basic metadata dependencies that affect deployments (like the fact that any code or customizations that interact with custom objects or fields need those objects and fields to exist in order to deploy) no matter what organizing principle you choose.

Building a Salesforce DX project to support multiple packages

Instead of modifying anything about the Salesforce DX project containing my unmanaged package extract, I created a new and empty repository with the command sfdx force:project:create -n easy-spaces -p es-base. My initial sfdx-project.json looked like this:

{
  "packageDirectories": [
    {
      "path": "es-base",
      "default": true
    }