What method(s) can be used to develop in one environment and deploy to multiple related Prod environments?
Sandbox and change sets
Sandbox and managed package
Dev Edition org and managed package
Dev Edition org and unmanaged package
Dev Edition org and managed package
Dev Edition org and unmanaged package
A sandbox is built from Prod, so anything in a sandbox can only be deployed to that Prod and not some other Prod. Managed packages cannot be built in sandboxes.
I want to engage in performance testing. Where should I do it?
Prod
Full Sandbox
Partial Sandbox
Developer Pro Sandbox
Developer Sandbox
Scratch Org
Full Sandbox
We need the data -- and ALL the data -- to see how the org performs. Doing this in Prod would not be a good idea, and Partial would not give us proper results. The other options have no data at all.
I want to engage in stress testing. Where should I do it?
Prod
Full Sandbox
Partial Sandbox
Developer Pro Sandbox
Developer Sandbox
Scratch Org
Don't do it in ANY of these. This is a shared environment, and Salesforce does not want you to try to break it.
How often can each of these sandboxes be refreshed?
Developer
Developer Pro
Partial Copy
Full
Scratch Org
Developer and Developer Pro: 1 day
Partial Copy: 5 days
Full Copy: 29 days
Scratch Org: it's not refreshable
How long can each of these sandboxes last before it is deleted?
Developer
Developer Pro
Partial Copy
Full
Scratch Org
Scratch Org: 30 days
Others: They are the Energizer Bunny and keep going and going and going.
What happens when a sandbox is refreshed?
A new copy of the latest metadata from Prod is created. For Full/Partial, new data is imported as well. That is the good part.
Existing development and data in the sandbox is overwritten. That is the bad part.
What is a "template" and which sandboxes have this option?
Developer
Developer Pro
Partial Copy
Full
Scratch Org
Partial Copy always uses a template, and Full Copy CAN use a template (or just be all data). The template is a way to specify which objects should be copied. Not which DATA in the objects (SF will determine this), but which OBJECTS. Using a template for a Full Copy can reduce what objects you copy and speed up the creation of the copy.
Four things that are best done in a Full sandbox
1. Final UAT (users can see real data)
2. Training users (it's a realistic environment)
3. Scale testing (see what happens when there's a lot of data)
4. Debugging, when a dev can't reproduce the issue in lower environments
What is the difference between Data storage and File storage?
File storage includes files in attachments, Files home, Salesforce CRM Content, Chatter files (including user photos), the Documents tab, the custom File field on Knowledge articles, and Site.com assets.
Data storage is everything else (specifically, object records).
Big Object storage may be handled separately(?)
How do I know whether something can be deployed via the Metadata API?
Check the Metadata Coverage Report
What is a change list / team change list?
A change-tracking tool consisting of a spreadsheet or list where a dev tracks changes
How much DATA storage does each edition (Prod) have?
Starting in late March 2019, Contact Manager, Group, Essentials, Professional, Enterprise, Performance, and Unlimited Editions are allocated 10 GB for data storage, plus incrementally added user storage. For example, a Professional Edition org with 10 users receives 10 GB of data storage, plus 200 MB, for 10.2 GB of total data storage.
How much FILE storage does each edition (Prod) have?
Contact Manager, Group, Professional, Enterprise, Performance, and Unlimited Editions are allocated 10 GB of file storage per org. Essentials edition is allocated 1 GB of file storage per org.
Orgs are allocated additional file storage based on the number of standard user licenses. In Enterprise, Performance, and Unlimited Editions, orgs are allocated 2 GB of file storage per user license. Contact Manager, Group, and Professional Edition orgs are allocated 612 MB per standard user license, which includes 100 MB per user license plus 512 MB per license for the Salesforce CRM Content feature license. An org with fewer than 10 users receives a total of 1 GB of per-user file storage rather than 100 MB per user license.
How much DATA storage does each of these sandboxes have?
Developer
Developer Pro
Partial Copy
Full
Scratch Org
Developer: 200 MB
Developer Pro: 1 GB
Partial: 5 GB
Full: Matches Production
Scratch Org: 200 MB
How much FILE storage does each of these sandboxes have?
Developer
Developer Pro
Partial Copy
Full
Scratch Org
Developer: 200 MB
Developer Pro: 1 GB
Partial: Matches Production
Full: Matches Production
Scratch Org: 50 MB
What metadata is automatically copied into each of these sandboxes?
Developer
Developer Pro
Partial Copy
Full
Scratch Org
All of the traditional sandboxes have metadata that matches production.
A scratch org has no metadata at all, unless it is deliberately included in a configuration file and specifically added.
How many licenses are created in each of these sandboxes?
Developer
Developer Pro
Partial Copy
Full
Scratch Org
Licenses in traditional sandboxes match the licenses in Prod.
A scratch org includes one administrator user with no password by default. Other users can be created with a definition file.
A Prod environment has certain features enabled. Will they be enabled in the sandbox by default?
Yes, for the traditional sandboxes. For a scratch org, they must be deliberately enabled via a definition file.
I really want data in my Developer/Developer Pro sandbox. Two ways for me to get it there.
1. Manually create sample data
2. Copy some data from Prod via the Salesforce CLI, a data loading tool, or a data migration tool (see the CLI sketch below).
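A rough sketch of option 2 using the Salesforce CLI (sfdx-style commands; the org aliases, query, and file paths are illustrative assumptions, not part of the card):
  # Export a small record tree from Prod (authenticated under the alias "Prod")
  sfdx force:data:tree:export -u Prod -q "SELECT Id, Name, Industry FROM Account LIMIT 50" -d ./data -p
  # Import the generated plan file into the Developer sandbox (alias "DevSbx"); the plan file name matches the exported objects
  sfdx force:data:tree:import -u DevSbx -p ./data/Account-plan.json
For larger or more relational data sets, a dedicated data loading or migration tool is usually the better fit.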
I only have a Partial sandbox. I want to check out some automation dealing with Accounts in Missouri. My Prod environment is huge and has Accounts from all over the US. I can't fit them all in Partial. How do I get just the Account records from Missouri?
I am not going to be a happy developer here, because I can't instruct SF which records to include in a template. I can say "Just Account records," but not "Just Missouri Account records." As SF says, "It's all-or-nothing at the object level."
Approximately how much time will it take to refresh/create a Full sandbox?
It depends on the size of Prod. SF warns it can take several days for a large org.
I really need to get a refreshed Full sandbox quickly. Am I out of luck?
SF is working on creating Quick-Create Sandboxes that may copy in 10 minutes rather than days. This may be in preview in Spring 21. They're aiming at both Developer and Full sandboxes.
A Salesforce release is upcoming. Three reasons to stay on the OLD version in the sandbox.
1. Hotfixes
2. Debugging
3. Development that will go live before the release
A Salesforce release is upcoming. Three reasons to go on the NEW version in the sandbox.
1. Development will go live after the release
2. User training on the new release
3. Creating release documentation
A Salesforce release is upcoming. I have gone with the preview in my sandbox, and now I'm ready to deploy. I have an error and can't deploy! What has happened?
I'm trying to deploy something built with the new release into an environment that's still on the old release. I am stuck until Prod is on the new release.
Four potential security problems with sandboxes that Salesforce wants us to think about.
1. Removing SSO requirements or IP restrictions for more convenient access by local development tools and other apps
2. Connecting off-platform apps they're building before those apps have been security tested
3. Testing AppExchange packages before you've reviewed their security
4. Making callouts to insecure external systems
The Indian devs need to test in a sandbox, but Prod data includes SSNs that should only be visible to a limited number of people within the US. What tool should be used?
Data Mask.
Data Mask uses platform-native obfuscation technology to mask sensitive data in any full or partial sandbox. You can configure different levels of masking, depending on the sensitivity of the data.
Data obfuscation is a way to modify and ensure privacy protection for PI and PII data. You can mask a field's contents by replacing the characters with unreadable results. For example, Brenda becomes gB1ff95-$.
What exactly is Data Mask, and which editions can use it?
Data Mask is a managed package that you install and configure in an Unlimited, Performance, or Enterprise production org. You then run the masking process from any sandbox created from the production org.
I used Data Mask to obfuscate data while QA tested it, but now I want to train some end users in the same sandbox. How do I un-mask the data?
Once your sandbox data is masked, you can't unmask it.
However, you can always refresh the data from production and create a new sandbox org.
The devs don't have admin access in their sandboxes. What (probably) happened, and how do I fix it?
The devs weren't given admin access in prod, so they don't have admin access in their sandboxes either. Someone with admin permissions must log into the sandbox and increase the devs' permissions there.
I want to experiment with how CPQ works. What's the best (SF-recommended) way?
Spin up a scratch org, not a traditional sandbox. To experiment with a traditional sandbox, it would be necessary to get a CPQ license added to production and then create a sandbox. With a scratch org, it's possible just to specify the CPQ feature in the configuration file and spin the scratch org up.
A Salesforce release is coming up. I use scratch orgs for development, not traditional sandboxes. Can I use the preview feature for a scratch org?
Normally, you create scratch orgs that are the same version as the Dev Hub. However, during the major Salesforce release transition that happens three times a year, you can select the Salesforce release version, Preview, or Previous, based on the version of your Dev Hub.
If you don't specify a release value, the scratch org version is the same version as the Dev Hub org.
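A sketch, assuming the sfdx-style CLI and a hypothetical project layout, of selecting the preview release in the scratch org definition file during the transition window:
  # Write a definition file that pins the scratch org to the preview release
  cat > config/preview-scratch-def.json <<'EOF'
  {
    "orgName": "Release Preview Test",
    "edition": "Developer",
    "release": "preview"
  }
  EOF
  # Spin up the scratch org from that definition (alias and duration are arbitrary)
  sfdx force:org:create -f config/preview-scratch-def.json -a previewTest -d 7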
How many scratch orgs can they have active at any given time? Adam has a Developer Edition, Betty has Enterprise Edition, Carl has Unlimited Edition, and Debra has Performance Edition.
Adam: limit of 3
Betty: limit of 40
Carl and Debra: limit of 100
I've used my limit of scratch orgs. I want to get rid of one so I can build another. Do I just have to wait for it to expire on its own?
No! Go to Dev Hub, find the Active Scratch Org list view, and delete the unwanted scratch org.
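The same cleanup can also be scripted, assuming the sfdx-style CLI and a hypothetical alias:
  # Delete an active scratch org without a confirmation prompt
  sfdx force:org:delete -u myOldScratch -p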
Why does Salesforce say that scratch orgs are good for enforcing dependencies?
The idea is that scratch orgs don't start with a copy of Prod's metadata. Everything has to be specified in source control. So you know exactly what your new development needs -- because you specified it.
Under the hood, what is a package (non org-dependent)?
Salesforce builds packages from scratch orgs.
System Containers doesn't want to use scratch orgs because it seems like too much work to get them to a point where they're as usable as traditional sandboxes. What is Salesforce doing to overcome System Containers' scratch-org-hesitancy?
SF is inventing "shapes", which either let a dev export a configuration file that matches Prod or spin up a new org based on "the current shape of production." This is Beta in Winter '21.
Four recommended CI/CD providers
1. Circle CI
2. GitLab
3. Jenkins
4. Travis CI
Two recommended Release Management Partners
1. GearSet
2. Copado
Two recommended IDEs
1. Illuminated Cloud
2. The Welkin Suite
Two recommended (free) CI tools
1. Cumulus CI
2. the Salesforce CLI (command tools, shell commands)
Recommended source control tools
1. SFDX command line tools -- source-control agnostic and allow the use of scripts
2. GitHub
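A minimal sketch of how the two fit together (sfdx-style CLI; the org alias, branch, and path names are hypothetical):
  # Pull tracked changes from a source-tracked org into the local project
  sfdx force:source:pull -u myScratchOrg
  # Commit and share them through Git/GitHub
  git checkout -b feature/my-change
  git add force-app
  git commit -m "Describe the change"
  git push origin feature/my-change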
Identify the seven deployment techniques, starting from the simplest/least scalable and progressing to the most complex/most scalable.
1. Manual
2. Change sets
3. Direct metadata deployment
4. Single source (source control) metadata deployment
5. Org-dependent package
6. Unlocked package
7. Managed package
What four situations are best suited to the most scalable and most complex types of deployments?
1. larger deployments of more changes
2. more teams or larger teams working on more projects simultaneously
3. more testing and automation enabling more frequent deployments
4. more consistent and reliable deployments
When should you make manual changes in Prod? (7 situations)
1. When metadata is not supported by the Metadata API (required)
2. Small changes that need to be made fast (recommended) -- e.g., turning off a validation rule or granting a temporary permission
3. During first time setup before go-live (permissible)
4. Reports (permissible)
5. Listviews (permissible)
6. New field (permissible)
7. Email template (permissible)
When should you NOT make manual changes in Prod? (5 situations)
1. Small changes that may be dangerous, e.g., creating a new validation rule
2. Automated testing -- test data can be hard to clean up
3. Many users -- multiple users may be changing records while you're changing the metadata (or data)
4. Complex changes -- hard to revert if they don't work, no clear record of what was done
5. Apex changes -- not possible
How to mitigate the risks when making manual changes in Prod
Make the changes in a sandbox, test, and then manually recreate in Prod.
Five limitations of change sets
1. They can only work with sandboxes, which must all be created from the same Prod
2. The "View/Add Dependencies" button may not find all dependencies (e.g., the test class for an Apex class)
3. Maximum of 10,000 files (items represented by a checkbox)
4. Sandbox may be on a different release cycle
5. Change sets can't remove metadata or configuration -- no destructive changes
Seven times to use change sets
1. An admin is doing the deployment (no code)
2. A team of veteran SF devs is doing the deployment (familiar tool)
3. You want all changes to hit Prod simultaneously.
4. You want to validate during business hours but deploy at night (validation can be separated from deployment)
5. You think your deployment may fail and you'll need to add something and try again (change sets can be cloned)
6. You want to back-change a sandbox without a full refresh (change sets can be bidirectional)
7. You want to control who can deploy (permissions control who can use change sets)
Four drawbacks to using change sets. Not fatal drawbacks, but reasons to consider another approach.
1. Tracking changes is slow. It's usually done via an elaborate spreadsheet.
2. Each deployment between sandboxes requires a separate change set. Dev to QA, QA to Prod requires someone to build a change set twice, manually, with lots of checkboxes.
3. Not every metadata type is supported.
4. Indeterminate delays between uploading the outbound change set and deploying the inbound change set.
Can change sets be used in conjunction with the Salesforce CLI?
Yes. A CLI user (or a script) can retrieve a change set by name from a sandbox and extract the source. This is not so much using change sets any longer -- it's a sort of hybrid technique.
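A sketch of that hybrid technique, assuming the sfdx-style CLI and a hypothetical change set name and sandbox alias:
  # Retrieve the outbound change set by name from the sandbox
  sfdx force:mdapi:retrieve -u DevSbx -p "My Change Set" -r ./retrieved -w 10
  # Extract the source (the zip file name may vary by CLI version)
  unzip ./retrieved/unpackaged.zip -d ./retrieved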
Three common ways to use the Metadata API for a direct deployment
1. Ant scripts / Ant Migration Tool / Force.com Migration Tool
2. Salesforce CLI (see the sketch below)
3. Salesforce Extensions for Visual Studio Code
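A sketch of option 2 with sfdx-style commands (the metadata directory and org alias are assumptions):
  # Validate only (check-only deployment) against the target org
  sfdx force:mdapi:deploy -d ./mdapi-src -u uatSandbox -l RunLocalTests -w 30 -c
  # Deploy for real once validation passes
  sfdx force:mdapi:deploy -d ./mdapi-src -u uatSandbox -l RunLocalTests -w 30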
Three limitations of direct metadata deployments
1. As with change sets, a maximum of 10,000 files are allowed per transaction
2. The total unzipped size of the files cannot exceed 400 MB
3. The Metadata API does not support all file types
Four advantages to using a direct metadata deployment
1. It is easy to repeat (either between the same orgs or to multiple target orgs)
2. It permits destructive changes
3. You can deploy settings that have not been activated in Prod (e.g., a chatbot or Path) -- fewer manual steps in the Setup UI.
4. You can create repeatable deployment scripts to make sure some items that are deployed with the metadata will be in the proper state before/after the deployment
Two disadvantages to direct metadata deployments
1. Hard to trace (metadata from my system looks exactly like metadata modified in Prod)
2. Hard to control (devs may overwrite one another, devs may not test before deploying)
Mitigation for risks of direct metadata deployment
Appoint a Release Manager (single person!) with authority to deploy. This limits some of the risk when multiple devs deploy, but a Release Manager can become a bottleneck too.
Describe the fundamental structure of metadata deployment with source control
Devs do not deploy their work directly beyond the dev environment. They merge it into a source repository branch. A CI system deploys the merged branch by getting the source from the repo, authenticating to the target org, and deploying.
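A sketch of what that CI step often looks like, assuming a connected app configured for the JWT bearer flow and sfdx-style commands (the environment variable names, key path, and alias are hypothetical):
  # Authenticate non-interactively using the JWT bearer flow
  sfdx force:auth:jwt:grant --clientid "$CONNECTED_APP_CLIENT_ID" \
    --jwtkeyfile assets/server.key --username "$DEPLOY_USERNAME" \
    --instanceurl https://login.salesforce.com -a targetOrg
  # Deploy the merged source from the repo to the target org
  sfdx force:source:deploy -p force-app -u targetOrg -l RunLocalTests -w 60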
What are the limitations of metadata deployments with source control?
The same as direct metadata deployments
What are six advantages of metadata deployments with source control?
1. Lots of tools out there for source control, starting with Git -- it's a familiar problem and well-addressed by now
2. Developers know these tools
3. Source control supports automation
4. Scales well for large teams (or multiple teams)
5. Branches help multiple projects happen simultaneously (even on a small team, there may be new dev work and bug fixes and release checks all happening at once -- branches help organize)
6. Feature branches allow partial deployments (you can deploy branch A but hold back branch B for further work)
What type of automation should I expect to see with metadata deployments with source control?
CI system
GitHub Action
Webhook (reverse API)
Testing automation
Code analysis
Linter/styling
Four situations where not to use metadata deployments with source control
1. Metadata types used don't deploy well
2. Dev team is unfamiliar with source control, and it would take time for them to learn
3. Releases are large and infrequent
4. Can't invest the time to set up this kind of tooling -- dev team is too busy to be off "primary tasks"
Can an unlocked package be upgraded?
No. Only managed packages.
However, you can revert to a previous version of an unlocked package.
Explain first-generation and second-generation managed packages
First-generation managed packages are "Classic Packaging". Originally designed for the AppExchange. DO NOT USE NOW.
Second-generation managed packages are created from source and not from the contents of an org. These are now what we refer to as "managed packages." If that term is used without any qualifier, assume it's a second-generation managed package.
Salesforce describes the idea of a package as:
A subset of metadata that is versioned.
You can upgrade to a newer version, or revert to a previous version.
Four advantages to packages (all types)
1. You can uninstall easily, and without knowing exactly what was in the package
2. You can remove some metadata from a package, and when the package is installed, the metadata will be removed from the org
3. Packages can be built on top of other packages and have explicitly declared dependencies
4. Code can be easily shared across multiple orgs with packages
Where can a package be deployed when it is in Beta status?
Only in scratch orgs and sandboxes -- not in Prod.
In what status can a package be deployed into Prod?
Released status
How to create a package (three general steps)
1. Specify a folder of source code that you want to become the package
2. Create a package using the CLI, with the owner being a Dev Hub.
3. Create a version of the package (a snapshot of source code at a point in time). See the CLI sketch below.
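A sketch of those three steps with the sfdx-style CLI (the package name, path, and Dev Hub alias are assumptions):
  # 1. The source folder (e.g., force-app) is declared as a package directory in sfdx-project.json
  # 2. Create the package, owned by the Dev Hub
  sfdx force:package:create --name "My App" --packagetype Unlocked \
    --path force-app --targetdevhubusername DevHub
  # 3. Create a version -- a snapshot of the source at a point in time
  sfdx force:package:version:create --package "My App" --installationkeybypass \
    --codecoverage --wait 60 --targetdevhubusername DevHub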
What is an org-dependent package?
- An unlocked package created with a special flag (--orgdependent). It is not a different kind of package.
- A package that allows dependencies outside of the package and not in another package (i.e., they depend on something in the org). You can package metadata that depends on unpackaged metadata in the installation org.
- To create an org-dependent unlocked package, specify the --orgdependent CLI parameter on the package:create CLI command (see the sketch below).
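As a sketch (the package name, path, and Dev Hub alias are hypothetical), the only difference at creation time is the extra flag:
  sfdx force:package:create --name "My Org-Dependent Pkg" --packagetype Unlocked \
    --orgdependent --path force-app --targetdevhubusername DevHub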
Give an example of an org-dependent package that includes a Flow
The Flow relies on a custom notification type that can't be packaged. You optimistically assume this custom notification type is in your org and do not include it in the org-dependent package. If it turns out you are wrong, the package will throw an error on installation.
Where do you build an org-dependent package?
- In a sandbox that supports source tracking, so that 1) it contains all the metadata you might depend on, and 2) your changes are tracked and you can pull them into source control
- Use a sandbox that contains the dependent metadata. Consider enabling Source Tracking in Sandboxes to develop your org-dependent unlocked package. Then, test the package in a sandbox org before installing it in your production org.
Two limitations of org-dependent packages
1. Other packages cannot depend on an org-dependent package
2. Org-dependent packages can't depend on other packages (SF won't check the dependency)
Six reasons to choose org-dependent packages for deployment
1. You might want a package, but you don't want to package the supporting metadata it depends on
2. You have some metadata that isn't ready to be packaged -- maybe it's tangled and circular
3. You don't control the metadata -- e.g., it's owned by another team at your company or an AppExchange app
4. You can't modularize your existing metadata
5. *Especially:* You want to deploy over existing unpackaged metadata -- afterwards that metadata will be perceived as part of the package, regardless of the original deployment method
6. You can't create a scratch org that supports the contents of your package, possibly because you can't validate a "normal" package. Org-dependent packages skip the step that validates packages in a scratch org.
Three reasons not to choose an org-dependent package
1. Your package can include/declare all of its dependencies. Use an unlocked package instead.
2. You want to deploy the package to a scratch org. You need to deploy to a sandbox where the dependency is met.
3. All packages take significant time to create, release, and install
When does validation occur with an org-dependent package?
When you use org-dependent unlocked packages, metadata validation occurs during package installation, instead of during package version creation.
Is an org-dependent package "build once, install anywhere"?
No. These packages are designed for specific production and sandbox orgs. You can install them only in orgs that contain the metadata that the package depends on.
What is the code coverage requirement for an org-dependent package?
- We don't calculate code coverage, but we recommend that you ensure the Apex code in your package is well tested.
- For an ordinary unlocked package, 75%.
What is the meaning of "unlocked" in "unlocked package"?
- "Allows changes not via the packaging process"
- If something in the unlocked package is causing problems, it can be modified after installation, in production.
A formula in an unlocked package is not functioning correctly, so the admin fixes it. The dev later re-installs the unlocked package. What will happen?
The org will contain the incorrect version of the formula. The only way to make a permanent change is to update the formula in the package.
Two locations for dependencies with an unlocked package
1. Inside the unlocked package
2. Inside another package explicitly declared in the package's dependencies
Three limitations of unlocked packages
1. You cannot have unpackaged external dependencies
2. You must be able to configure a scratch org to support everything the package requires. (Under the covers, package version creation builds and validates the package in a scratch org.)
3. 75% minimum Apex test coverage. Test will run as part of the packaging process.
Four reasons to choose unlocked packages
1. The state of the metadata will be known exactly. Packages are linked to source control, and the org has a record of package version deployments.
2. The package can be deployed to a scratch org for testing.
3. You can revert to a previous version.
4. You can deploy over unpackaged metadata
Three reasons NOT to choose unlocked packages
1. Changes in production keep getting overwritten by new package deployments
2. Packages have "formal ancestry requirements", which impede large refactorings
3. All packages take significant time to create, release, and install
A mitigation for risks with unlocked packages
1. You can skip validation for packages intended for non-production environments. This speeds up the packaging process for low-level deployment/testing. You'll eventually need to validate before it can be deployed to Prod, of course.
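A sketch of that option with the sfdx-style CLI (the package name and Dev Hub alias are assumptions):
  # Build a quick, unvalidated version for non-production testing only
  sfdx force:package:version:create --package "My App" --skipvalidation \
    --wait 30 --targetdevhubusername DevHub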
Where are managed packages usually found?
On the AppExchange
Two limitations of managed packages
1. It's difficult to delete something that has been exposed
2. A namespace must be associated with the Dev Hub, and anything referring to the package's code will need to use that namespace
Four reasons to choose managed packages for deployment
1. You're a partner wanting to deploy on AppExchange
2. You want to create a package to be used in multiple orgs, and you need to block changes in production
3. You need to formalize and encapsulate internals
4. You need namespaces to help keep code organized and modular -- adding complexity is OK under the circumstances
Five reasons NOT to choose managed packages for deployment
1. You're not an AppExchange partner
2. You're not 100% sure how the metadata might be reused, and you don't want to prevent ALL reuse
3. The package's functionality changes frequently or may need to allow for major refactoring
4. Your team is not absolutely sure how to handle namespaces
5. All packages take significant time to create, release, and install
Where should I go to find out where product gaps are (what can be packaged, source-tracked, supported by the Metadata API)?
Metadata Coverage Report
Example SF likes to use about what's not covered by the Metadata API
Prediction Builder
Example SF likes to use about what can't be included in any package
LiveAgent (Chat)
ExperienceBundle works with unlocked packages. LiveAgentButton does not. I have a Community (Experience) where I use a LiveAgentButton. Can I use an unlocked or a managed package to deploy ExperienceBundle to that Experience/Community?
No.
But this is exactly where to consider using an org-dependent package.
How to handle post-deployment tasks such as:
1. assign a permission set
2. create a new Chatter group
3. populate a custom setting
4. set business hours
5. migrate CPQ rules
1. Manual steps
2. If you're using packages, these can be handled with Apex classes run in a post-install script.
After a package deployment, a release manager can run Apex classes in a post-install script to complete certain tasks (e.g., to create a new Chatter group). Can those Apex classes be run alternatively in a pre-install script? Can they be included as part of the package and installed along with the package?
No, these are POST install tasks that must be performed either manually or as Apex classes run in a post-install script.
When are post-install tasks performed manually, and when are they performed in an external script?
- They're performed manually for manual deployments and change sets.
- They're performed either manually or via external scripts during a metadata deployment or a package installation
When is it safe to deploy a profile change?
- Only with change sets.
- Metadata deployments and packages can become complicated if permissions are deployed at the profile level.
When is it not safe to deploy a profile change, and permission sets are a better option?
- Metadata deployments and packages should be deployed with permission sets rather than profiles.
- When these deployment tools are used and profiles are added, the entire profile will be changed.
- We need more granularity, so use permission sets.
By default, what permissions are granted to the system admin profile when a package is deployed?
- Everything in the package is assigned to the system admin profile by default.
- Therefore, a system admin will have ALL permissions to ALL packages.
What is the name of the option to run only certain tests during validation/deployment of a change set?
Run Specified Tests (UI) / RunSpecifiedTests (DeployOptions object in Metadata API)
Run Local Tests (UI) / RunLocalTests (DeployOptions object in Metadata API)
Run Specified Tests (UI) / RunSpecifiedTests (DeployOptions object in Metadata API)
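When the deployment runs through the Metadata API via the CLI, the equivalent looks roughly like this (sfdx-style command; the manifest path, org alias, and test class names are hypothetical):
  sfdx force:source:deploy --manifest manifest/package.xml --targetusername Prod \
    --testlevel RunSpecifiedTests --runtests AccountTriggerTest,OpportunityServiceTest --wait 60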
Four things that may dramatically slow deployment/validation of a change set
1. Number and complexity of Apex tests
2. Users working in the org during deployment -- may cause locking
3. Sharing settings (note: not permissions/profiles necessarily, but anything that might cause role hierarchy recalculation or OWD recalculation -- could even be a new junction object)
4. Field type changes
If a change set includes changes to custom field types, the deployment can take much longer because custom field type changes might require updates to a large number of records. To avoid long delays, an alternative is to apply the field type change manually after the change set is deployed.