
150 Advanced Salesforce Interview Questions 2026 | SF Interview Pro
⚡ Advanced Level — 2026 Edition

150 Advanced Salesforce Interview Questions

Deep-dive questions on Apex internals, Async patterns, Integration, Security, LWC, and Platform Events — for senior roles in 2026.

150 Questions · 11 Categories · Senior Level · Free (No Paywall)
Apex Collections, Performance & Internals
Advanced Apex questions asked at senior developer and architect level
Q1–Q15
Q1. When would you use a Set instead of a List for storing sObject records, and what is the performance difference? Advanced
🔢Use a Set when you need fast existence checks (O(1) average) and duplicates are not allowed. Use a List when order matters or you need index-based access. For large datasets with frequent contains() checks, a Set is significantly faster than a List.
📊 Performance Comparison
Operation | List Complexity | Set Complexity
contains(element) | O(n) — scans entire list | O(1) — hash lookup
add(element) | O(1) | O(1)
get(index) | O(1) | ❌ Not supported
Duplicates | ✅ Allowed | ❌ Auto-removed
Order | ✅ Insertion order | ❌ No guaranteed order
🏭 Real World

In our manufacturing org, we process 5,000 Order records in a batch and need to check whether each Account is in a "preferred" list. Using List.contains() meant O(n) per check = 5,000 × 5,000 = 25 million iterations. Switching to a Set<Id> of preferred Account IDs reduced it to 5,000 O(1) checks — batch processing time dropped by 80%.

🎯 Key Points for Interviewer
  • Set uses hash-based storage — contains() is O(1) average case
  • List.contains() iterates entire list — O(n) worst case
  • Use Set<Id> for ID lookups inside loops — most common Apex pattern (sketched below)
  • Sets don't support index access — convert to List if needed
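A minimal sketch of this pattern; object and field names like Order__c and Preferred__c are illustrative:
// Build the lookup Set once, outside the loop
Set<Id> preferredAccountIds = new Map<Id, Account>(
    [SELECT Id FROM Account WHERE Preferred__c = true]
).keySet();

for(Order__c o : [SELECT Id, Account__c FROM Order__c WHERE Status__c = 'New']) {
    // O(1) hash lookup per record instead of O(n) List.contains()
    if(preferredAccountIds.contains(o.Account__c)) {
        // process the preferred-account order...
    }
}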
Say This in Interview
"I use Set for O(1) existence checks inside loops — especially for ID lookups in trigger context where List.contains() would be O(n) per record and kill performance on bulk operations."
Q2. How do you handle NullPointerExceptions when accessing nested relationship fields like contact.Account.Owner.Name? Medium
🛡️Use the Safe Navigation Operator (?.), introduced in Winter '21 (API v50.0) — it short-circuits the expression and returns null instead of throwing a NullPointerException when any intermediate object is null.
// ❌ OLD WAY — verbose null checks
String ownerName = null;
if(contact.Account != null && contact.Account.Owner != null) {
    ownerName = contact.Account.Owner.Name;
}

// ✅ NEW WAY — Safe Navigation Operator
ownerName = contact.Account?.Owner?.Name;
// Returns null if any part of the chain is null — no exception

// ✅ ALSO WORKS with methods
String upper = contact.Account?.Name?.toUpperCase();
✅ Three Approaches
  • 1️⃣Safe Navigation Operator (?.): Best for Apex code — clean, readable, minimal lines
  • 2️⃣Formula Field: Create Account_Owner_Name__c formula on Contact — Salesforce handles nulls internally
  • 3️⃣Try-Catch: Last resort — catch NullPointerException but this hides bugs
Say This in Interview
"I use the Safe Navigation Operator (?.) in Apex — it short-circuits on null and returns null instead of throwing an exception, making chained field access clean without nested if-null checks."
Q3. What is Database.insert(records, allOrNone=false) and when would you use partial DML success? Medium
💾Database.insert() with allOrNone=false allows partial success — valid records are committed even when some fail, instead of rolling back the entire operation. It returns a SaveResult[] that identifies which records succeeded and which failed.
// Process results of partial DML
Database.SaveResult[] results = Database.insert(leadList, false);
List<Id> successIds = new List<Id>();
List<String> errors = new List<String>();

for(Integer i = 0; i < results.size(); i++) {
    if(results[i].isSuccess()) {
        successIds.add(results[i].getId());
    } else {
        for(Database.Error err : results[i].getErrors()) {
            errors.add('Row ' + i + ': ' + err.getMessage());
        }
    }
}
🏭 Real World

Our IndiaMART integration receives 200 leads per API call. Some have invalid email formats, some are missing required fields. Using allOrNone=true would fail all 200 if even one has bad data. With allOrNone=false, the 195 valid leads are created, and we log the 5 failures to an Integration_Error__c object for manual review.

Say This in Interview
"I use Database.insert() with allOrNone=false for integrations where partial success is better than total failure — iterate the SaveResult array to handle successes and failures independently, logging errors to a custom object for visibility."
Q4. Explain Database.setSavepoint() and Database.rollback() — when would you use them in a real scenario? Advanced
💾Savepoints allow partial rollback within a transaction — roll back to a specific point without reverting the entire transaction. Use them when you want to attempt a risky operation, roll back only that part if it fails, and continue with the rest of the transaction.
// Set savepoint before risky operation
Savepoint sp = Database.setSavepoint();
try {
    // Attempt risky DML
    insert riskyRecord;   // If this throws, we catch it below
} catch(DmlException e) {
    // Roll back only to savepoint — not entire transaction
    Database.rollback(sp);
    // Log the error and continue
    System.debug('Failed: ' + e.getMessage());
}
// Safe DML continues — unaffected by rollback above
update safeRecords;
⚠️ Important Rules
  • Savepoints only roll back DML — not callouts or platform events already published
  • Each savepoint consumes one DML statement from governor limits
  • Cannot use savepoints across async boundaries (Future, Queueable)
  • Rollback does NOT reset governor limit counters — SOQL/DML counts remain
Say This in Interview
"Savepoints give me partial rollback within a transaction — I set one before a risky operation, catch any exception, roll back to that point, and let the rest of the transaction proceed normally without losing safe DML already performed."
Q5. What is Dynamic SOQL and when is it necessary compared to static SOQL? Medium
🔍Dynamic SOQL uses Database.query(String) to execute a SOQL string built at runtime. Use it when the object type, field names, or filter conditions are not known at compile time — typically in configurable integrations, generic utilities, or admin-configurable features.
// Static SOQL — object and fields known at compile time
List<Account> accs = [SELECT Id, Name FROM Account WHERE Industry = 'Tech'];

// Dynamic SOQL — built at runtime from config
String objectName = 'Account';           // from Custom Metadata
String fields = 'Id, Name, Industry';    // from config
String query = 'SELECT ' + fields + ' FROM ' + objectName;
List<sObject> results = Database.query(query);

// ⚠️ ALWAYS sanitise user input to prevent SOQL injection
String safeValue = String.escapeSingleQuotes(userInput);
String safeQuery = 'SELECT Id FROM Account WHERE Name = \'' + safeValue + '\'';
⚠️ Security Warning
  • 🚨SOQL Injection risk: Never concatenate raw user input — always use String.escapeSingleQuotes() or bind variables
  • Bind variables in Dynamic SOQL: Use :variable syntax inside the query string for safe parameterisation
Say This in Interview
"Dynamic SOQL is necessary when object or field names are determined at runtime — like configurable integrations using Custom Metadata mappings. I always use String.escapeSingleQuotes() or bind variables to prevent SOQL injection attacks."
Q6. What is the difference between Custom Settings (List vs Hierarchy) and Custom Metadata Types? When do you use each? Advanced
⚙️Custom Metadata Types are preferred in 2026 — their records deploy with Change Sets and SFDX, don't consume data storage, and are easily packageable. Custom Settings are legacy; use Hierarchy Settings only when you need user/profile-specific configuration overrides.
Factor | List Custom Setting | Hierarchy Custom Setting | Custom Metadata Type
Deployment | Data — separate load needed | Data — separate load needed | ✅ Metadata — deploys with code
Data Storage | Counts against limit | Counts against limit | ✅ No data storage used
User/Profile override | ❌ No | ✅ Yes | ❌ No
SOQL Access | ✅ Queryable | ❌ Not directly | ✅ Queryable
Packageability | ⚠️ Data not auto-packaged | ⚠️ Data not auto-packaged | ✅ Records packaged with type
Best for | Static lookup data | User-specific config | Everything else in 2026
🏭 Real World

For our Business Central API endpoints (URL, credentials), we moved from Custom Settings to Custom Metadata Types — now the endpoint config deploys automatically with every Change Set instead of requiring a separate post-deployment data load step. The only Custom Setting we kept is Hierarchy type for territory-based email notification preferences per user.
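A minimal sketch of that split; the API names (BC_Endpoint__mdt, Notification_Prefs__c) are illustrative:
// Custom Metadata Type — queryable, deploys with code
BC_Endpoint__mdt ep = [
    SELECT Endpoint_URL__c FROM BC_Endpoint__mdt
    WHERE DeveloperName = 'Production' LIMIT 1
];

// Hierarchy Custom Setting — per-user/profile override, no SOQL needed
Notification_Prefs__c prefs = Notification_Prefs__c.getInstance(UserInfo.getUserId());
Boolean emailEnabled = prefs.Email_Enabled__c;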

Say This in Interview
"In 2026 I default to Custom Metadata Types — records deploy with code, no data storage cost, easily packaged. I only use Hierarchy Custom Settings when I need user or profile-level configuration overrides that Custom Metadata can't provide."
Q7. How do you check and enforce Field Level Security (FLS) in Apex before reading or writing data? Advanced
🔐Apex runs in system mode by default and bypasses FLS. You must explicitly enforce FLS using Security.stripInaccessible(), WITH SECURITY_ENFORCED in SOQL, or Schema describe methods — otherwise restricted fields are exposed to users who shouldn't see them.
// Option 1 — WITH SECURITY_ENFORCED in SOQL (read)
List<Account> accs = [SELECT Id, Name, Revenue__c FROM Account WITH SECURITY_ENFORCED];
// Throws an exception if the user can't read any field in the SELECT

// Option 2 — Security.stripInaccessible() (read & write)
SObjectAccessDecision dec = Security.stripInaccessible(
    AccessType.READABLE,
    [SELECT Id, Name, Salary__c FROM Contact]
);
List<Contact> safeContacts = dec.getRecords();
// Salary__c stripped if user has no read access

// Option 3 — Schema describe (legacy)
if(Schema.sObjectType.Account.fields.Revenue__c.isAccessible()) {
    // safe to read
}
✅ Best Practice in 2026
  • Security.stripInaccessible(): Most flexible — works for both read and write, strips fields silently
  • WITH SECURITY_ENFORCED: Simple for SOQL reads — throws exception on inaccessible fields
  • WITH USER_MODE: Newest approach — enforces OWD, FLS, and object permissions in one clause
Say This in Interview
"Apex bypasses FLS by default — I use Security.stripInaccessible() for flexible field-level enforcement on both reads and writes, or WITH SECURITY_ENFORCED in SOQL for read operations, ensuring users never see data their profile restricts."
Q8. What is the Mixed DML error and how do you resolve it? Medium
🚫Mixed DML error occurs when you try to perform DML on setup objects (User, Group, GroupMember, PermissionSet, etc.) in the same transaction as non-setup objects. Salesforce treats these as incompatible DML in one transaction.
🔍 Common Setup Objects that Cause This
  • User, Group, GroupMember, UserRole, Profile
  • PermissionSet, PermissionSetAssignment
  • Territory, UserTerritory
// ❌ CAUSES Mixed DML Error
insert new Account(Name = 'Test');   // non-setup
insert new User(...);                // setup — ERROR!

// ✅ FIX — Move setup DML to @future
@future
public static void insertUserAsync(String username) {
    insert new User(Username = username, ...);
}

// Now call from trigger/class:
insert new Account(Name = 'Test');
insertUserAsync('user@test.com');   // runs in separate transaction
Say This in Interview
"Mixed DML error happens when setup objects like User or PermissionSet are in the same transaction as regular objects — I resolve it by moving the setup DML into a @future method which runs in a separate transaction, or using System.runAs() in test context."
Q9. What is the Singleton design pattern in Apex and how do you implement it? Advanced
🏗️The Singleton pattern ensures only one instance of a class is created per Apex transaction. In Salesforce, it is commonly used for trigger handler frameworks and utility classes to avoid repeated initialisation and share state across multiple trigger invocations in one transaction.
public class OrderTriggerHandler {
    // Private static instance — persists for entire transaction
    private static OrderTriggerHandler instance;

    // Private constructor — prevents direct instantiation
    private OrderTriggerHandler() {}

    // Public accessor — creates instance only once
    public static OrderTriggerHandler getInstance() {
        if(instance == null) {
            instance = new OrderTriggerHandler();
        }
        return instance;
    }

    public void handleBeforeInsert(List<Order__c> orders) {
        // logic here — same instance reused if trigger fires again
    }
}
🏭 Real World

Our Order trigger handler uses Singleton — the same instance is reused if the trigger fires multiple times in one transaction (e.g., bulk insert + update in same context). This lets us maintain a Set of already-processed record IDs as instance state, preventing redundant processing without using a static variable directly in the trigger.

Say This in Interview
"The Singleton pattern in Apex uses a private static instance variable with a private constructor — the public getInstance() method creates the object only once per transaction, making it ideal for trigger handlers that need to share state across multiple trigger invocations."
Q10. How do you create and use custom Exception classes in Apex? When is it better than using System exceptions? Medium
🚨Create custom exceptions by extending the Exception class. Use them to throw domain-specific errors with meaningful names instead of generic System.DmlException or NullPointerException — this makes debugging faster and allows specific catch blocks in calling code.
// Define custom exception — class name must end in 'Exception'
public class OrderValidationException extends Exception {}
public class IntegrationException extends Exception {}

// Throw with message
if(order.Amount__c < 0) {
    throw new OrderValidationException(
        'Order amount cannot be negative: ' + order.Id
    );
}

// Catch specific exception types separately
try {
    processOrder(order);
} catch(OrderValidationException e) {
    // Handle validation failure
    logValidationError(e.getMessage());
} catch(IntegrationException e) {
    // Handle integration failure differently
    notifyAdmin(e.getMessage());
}
Say This in Interview
"Custom exception classes extend Exception and let me throw domain-specific errors — OrderValidationException vs IntegrationException — so callers can catch each type separately and handle them differently, instead of catching a generic Exception and guessing what went wrong."
Q11. What is the WITH SECURITY_ENFORCED clause in SOQL and what does it do? Advanced
🔒WITH SECURITY_ENFORCED tells Salesforce to enforce Field Level Security and Object Permissions at the SOQL level — if the running user doesn't have read access to any field or object in the query, Salesforce throws a QueryException immediately rather than returning restricted data.
// Throws QueryException if user lacks READ on any field
List<Contact> contacts = [
    SELECT Id, Name, Salary__c, SSN__c
    FROM Contact
    WITH SECURITY_ENFORCED
];

// More granular alternative — strips inaccessible fields silently
SObjectAccessDecision dec = Security.stripInaccessible(
    AccessType.READABLE,
    [SELECT Id, Name, Salary__c FROM Contact]
);
// dec.getRecords() — Salary__c removed if no access
// dec.getRemovedFields() — tells you what was stripped
Approach | On FLS Violation | Use When
WITH SECURITY_ENFORCED | Throws QueryException | You want to fail loudly if access is missing
Security.stripInaccessible() | Silently removes restricted fields | You want to return safe data regardless
WITH USER_MODE | Enforces OWD + FLS + object perms | Full sharing + FLS enforcement needed
Say This in Interview
"WITH SECURITY_ENFORCED enforces FLS at query time — it throws a QueryException if the running user lacks read access to any field in the SELECT clause. I use Security.stripInaccessible() instead when I want to silently return only the fields the user can access rather than failing the entire operation."
Q12. What are Apex Sharing Reasons and why are they required for custom object Apex Managed Sharing? Advanced
🔐Apex Sharing Reasons are custom labels defined on a custom object that identify WHY a programmatic share record was created. They are required for custom object sharing via Apex because without a RowCause reason, Salesforce cannot track the origin of the share and will delete it if the record owner changes.
// Define the Sharing Reason on the custom object in Setup first, then use in Apex:
Project__Share shareRecord = new Project__Share();
shareRecord.ParentId = projectId;
shareRecord.UserOrGroupId = userId;
shareRecord.AccessLevel = 'Read';
// Use the custom sharing reason (defined in Setup)
shareRecord.RowCause = Schema.Project__Share.RowCause.Team_Member_Access__c;
Database.insert(shareRecord, false);

// For STANDARD objects — RowCause must be 'Manual'
AccountShare sh = new AccountShare();
sh.RowCause = Schema.AccountShare.RowCause.Manual;
⚠️ Key Rule
  • Custom objects support custom Sharing Reasons — sharing persists even if owner changes
  • Standard objects only support Manual RowCause for Apex sharing — sharing may be deleted on owner transfer
  • Custom Sharing Reasons must be created in Object Manager before use in Apex
Say This in Interview
"Apex Sharing Reasons identify the source of programmatic share records on custom objects — they're required because without a custom RowCause, Salesforce treats shares as Manual and deletes them when the record owner changes, breaking the intended access pattern."
Q13. What is the Apex Crypto class used for? Describe a real use case. Advanced
🔑The Apex Crypto class provides cryptographic operations — hashing (MD5, SHA-256), HMAC generation, symmetric encryption/decryption (AES-128/256), and digital signatures. Used for securing webhook payloads, generating checksums, signing API requests, and validating data integrity.
// Use Case 1: Verify webhook payload signature
public static Boolean verifyWebhookSignature(
    String payload, String receivedSignature, String secret
) {
    Blob key = Blob.valueOf(secret);
    Blob data = Blob.valueOf(payload);
    Blob hmac = Crypto.generateMac('HmacSHA256', data, key);
    String computed = EncodingUtil.convertToHex(hmac);
    return computed.equals(receivedSignature);
}

// Use Case 2: Generate secure token
Blob randomBytes = Crypto.generateAESKey(256);
String token = EncodingUtil.base64Encode(randomBytes);

// Use Case 3: Hash sensitive data before storing
Blob hashed = Crypto.generateDigest('SHA-256', Blob.valueOf(ssn));
String hashedSSN = EncodingUtil.convertToHex(hashed);
🏭 Real World

Our IndiaMART webhook receiver uses Crypto.generateMac() with HmacSHA256 to verify that incoming lead payloads genuinely come from IndiaMART and haven't been tampered with in transit. The HMAC key is stored in a Named Credential — never hardcoded.

Say This in Interview
"I use Apex Crypto for HMAC-based webhook signature verification — generate the HMAC on the received payload using our shared secret, compare it to the signature in the header, and reject the request if they don't match to prevent spoofed webhook calls."
Q14. What are virtual, abstract, and override keywords in Apex? Medium
🏗️virtual allows a method to be overridden but has a default implementation. abstract requires subclasses to implement the method — no default implementation. override marks a method as replacing a parent's virtual or abstract method.
// Abstract class — cannot be instantiated directly
public abstract class BaseIntegrationHandler {

    // abstract method — subclass MUST implement
    public abstract HttpRequest buildRequest(String endpoint);

    // virtual method — subclass CAN override, has default
    public virtual Integer getTimeout() {
        return 30000;
    }

    // non-virtual method — subclass CANNOT override
    public HttpResponse execute(String endpoint) {
        HttpRequest req = buildRequest(endpoint);
        req.setTimeout(getTimeout());
        return new Http().send(req);
    }
}

// Concrete subclass
public class BCIntegrationHandler extends BaseIntegrationHandler {
    public override HttpRequest buildRequest(String endpoint) {
        HttpRequest req = new HttpRequest();
        req.setEndpoint(endpoint);
        req.setMethod('GET');
        return req;
    }
    // getTimeout() not overridden — uses parent's 30000ms
}
Say This in Interview
"virtual gives a default implementation that subclasses can optionally override; abstract forces subclasses to provide their own implementation with no default; override explicitly marks the method replacing the parent's version — together they enforce the Template Method design pattern in Apex."
Q15. How do you use the System.Callable interface and why is it useful? Advanced
🔌System.Callable allows you to invoke Apex classes dynamically using only their class name as a String — without compile-time dependencies. Essential for cross-package communication, plugin architectures, and when the caller and callee are in different packages or namespaces.
// Implement Callable in the target class
public class OrderProcessor implements Callable {
    public Object call(String action, Map<String, Object> args) {
        if(action == 'processOrder') {
            Id orderId = (Id) args.get('orderId');
            return processOrder(orderId);
        }
        throw new IllegalArgumentException('Unknown action: ' + action);
    }
    private Boolean processOrder(Id orderId) { return true; }
}

// Invoke dynamically — no direct dependency needed
Callable handler = (Callable) Type.forName('OrderProcessor').newInstance();
Map<String, Object> args = new Map<String, Object>{'orderId' => orderId};
Boolean result = (Boolean) handler.call('processOrder', args);
Say This in Interview
"System.Callable enables dynamic invocation of Apex classes using only a String class name — no compile-time dependency needed. I use it for plugin architectures where the caller in one package can invoke implementations in another package without a direct class reference."
⏱️
Advanced Async Apex Patterns
Batch, Queueable, Future, Finalizer — deep dive questions for senior roles
Q16–Q30
Q16. What is Database.Stateful in Batch Apex and when would you use it? Medium
📦Database.Stateful preserves instance variable values across multiple execute() calls in a Batch job. Without it, each execute() call gets a fresh class instance with default variable values — counters, accumulators, and error lists reset between batches.
global class OrderSummaryBatch implements Database.Batchable<sObject>, Database.Stateful {

    // These persist across all execute() calls
    global Integer totalProcessed = 0;
    global Integer totalFailed = 0;
    global List<String> errorLog = new List<String>();

    global Database.QueryLocator start(Database.BatchableContext bc) {
        return Database.getQueryLocator([SELECT Id FROM Order__c]);
    }

    global void execute(Database.BatchableContext bc, List<Order__c> scope) {
        totalProcessed += scope.size();   // accumulates across batches
        // process records...
    }

    global void finish(Database.BatchableContext bc) {
        // totalProcessed has the sum from ALL execute() calls
        sendSummaryEmail(totalProcessed, totalFailed, errorLog);
    }
}
⚠️ When NOT to Use Stateful
  • ⚠️Stateful batches use more memory — avoid for large variable collections
  • ⚠️If you only need count at finish — query AsyncApexJob in finish() instead
  • Use Stateful for error accumulation, running totals, or collecting IDs across batches
Say This in Interview
"Database.Stateful preserves instance variables across all execute() calls in a batch — without it, counters and error lists reset between chunks. I use it when I need to accumulate errors or totals across the entire batch for a summary notification in the finish() method."
Q17. How do you chain Queueable jobs and what are the limits? Advanced
🔗Queueable jobs chain by enqueueing a new job from within the execute() method of the current job. From synchronous Apex you can enqueue up to 50 jobs per transaction, but an executing Queueable can enqueue only one child job. Chain depth is unlimited in production, while Developer and Trial Edition orgs cap chains at 5 levels.
public class Step1Job implements Queueable {
    public void execute(QueueableContext ctx) {
        // Do Step 1 work
        List<Order__c> orders = [SELECT Id FROM Order__c WHERE Status__c = 'New'];
        // Process...

        // Chain Step 2 — only ONE child job can be enqueued from here
        System.enqueueJob(new Step2Job(orders));
    }
}

public class Step2Job implements Queueable {
    private List<Order__c> orders;
    public Step2Job(List<Order__c> orders) { this.orders = orders; }

    public void execute(QueueableContext ctx) {
        // Process orders from Step 1
        // Can chain Step3Job from here if needed
    }
}
Limit | Value | Notes
Chain depth (Production) | Unlimited | Each job in the chain runs independently
Chain depth (Developer/Trial Edition) | 5 levels | Edition restriction, not a sandbox one
Jobs enqueued per transaction | 50 | From synchronous context
Child jobs per executing Queueable | 1 | From async context
Object params supported | ✅ Yes | Unlike @future (primitives only)
Say This in Interview
"Queueable jobs chain by calling System.enqueueJob() inside the execute() method — this creates a new async job that runs after the current one completes. Unlike @future, Queueable supports complex object parameters and job monitoring via AsyncApexJob."
Q18. What is the System.Finalizer interface in Apex and when would you use it? Advanced
🏁System.Finalizer runs after a Queueable job completes — whether it succeeded or threw an unhandled exception. It is the only way to detect and respond to Queueable failures without external monitoring tools, enabling automatic retry, alerting, or fallback logic.
public class BCCalloutJob implements Queueable, Database.AllowsCallouts {
    public void execute(QueueableContext ctx) {
        // Attach finalizer before risky operation
        System.attachFinalizer(new BCCalloutFinalizer());
        // Make callout...
        HttpResponse res = makeCallout();
    }
}

public class BCCalloutFinalizer implements System.Finalizer {
    public void execute(System.FinalizerContext ctx) {
        if(ctx.getResult() == ParentJobResult.UNHANDLED_EXCEPTION) {
            Exception ex = ctx.getException();
            // Log failure to custom object
            insert new Integration_Error__c(
                Message__c = ex.getMessage(),
                Type__c = ex.getTypeName()
            );
            // Optionally enqueue retry job
            System.enqueueJob(new BCCalloutJob());
        }
    }
}
Say This in Interview
"System.Finalizer attaches to a Queueable job and runs regardless of whether it succeeds or fails — it's the only built-in mechanism to detect Queueable exceptions and implement automatic retry or failure alerting without external monitoring infrastructure."
Q19. How do you implement a Dead Letter Queue pattern for failed async Apex jobs? Advanced
📬A Dead Letter Queue in Salesforce is a custom pattern using a custom object to capture failed async job details — payload, error message, retry count, and status — enabling manual review, automatic retry scheduling, and preventing silent data loss from unhandled async failures.
🏗️ Implementation Pattern
  1. Create a Failed_Job__c custom object with fields: Payload__c (Long Text Area), Error_Message__c, Retry_Count__c, Status__c (New/Retrying/Dead), Job_Type__c
  2. In Finalizer.execute(), on UNHANDLED_EXCEPTION, insert a Failed_Job__c record with full context (see the sketch after this list)
  3. A scheduled job runs every hour — queries Failed_Job__c WHERE Status = 'New' AND Retry_Count < 3
  4. For each failed job, deserialise the payload, re-enqueue the job, and increment Retry_Count
  5. After 3 retries, set Status = 'Dead' and send an alert email to the integration team
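A minimal sketch of step 2, assuming the Failed_Job__c object described above; the payload string is supplied by the Queueable that attaches the finalizer:
public class DlqFinalizer implements System.Finalizer {
    private String payload;   // serialised job input
    public DlqFinalizer(String payload) { this.payload = payload; }

    public void execute(System.FinalizerContext ctx) {
        if(ctx.getResult() == ParentJobResult.UNHANDLED_EXCEPTION) {
            insert new Failed_Job__c(
                Payload__c       = payload,
                Error_Message__c = ctx.getException().getMessage(),
                Retry_Count__c   = 0,
                Status__c        = 'New',
                Job_Type__c      = 'BCOrderSync'   // illustrative job type
            );
        }
    }
}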
🏭 Real World

Our BC integration Queueable jobs have a Finalizer that writes failures to BC_Integration_Error__c. A Scheduled Apex job retries them hourly up to 3 times. After 3 failures the record goes to 'Dead' status and triggers an email alert to our integration team. Zero silent data loss since implementation.

Say This in Interview
"I implement DLQ using a custom Failed_Job__c object — a Finalizer writes failed Queueable jobs with full payload and error context, a scheduled retry job attempts up to 3 retries with exponential backoff, and after max retries sends an alert and marks the record Dead for manual investigation."
Q20. Can you start a Batch Apex job from within another Batch Apex execute() method? Advanced
NO — you cannot call Database.executeBatch() from within a Batch Apex execute() method. You CAN call it from the finish() method of a Batch job. For chaining batches, use finish() to start the next batch, or use a Queueable from execute() to trigger subsequent processing.
global class FirstBatch implements Database.Batchable<sObject> {

    global Database.QueryLocator start(Database.BatchableContext bc) {
        return Database.getQueryLocator([SELECT Id FROM Account]);
    }

    global void execute(Database.BatchableContext bc, List<Account> scope) {
        // ❌ CANNOT do this:
        // Database.executeBatch(new SecondBatch()); — FAILS

        // ✅ CAN do this — enqueue Queueable from execute:
        System.enqueueJob(new IntermediateQueueable(scope));
    }

    global void finish(Database.BatchableContext bc) {
        // ✅ CAN start next batch from finish():
        Database.executeBatch(new SecondBatch(), 200);
    }
}
Say This in Interview
"You cannot call Database.executeBatch() from within execute() — only from finish(). To chain batches I start the next batch in finish(), or if I need to trigger processing mid-batch I enqueue a Queueable job from execute() which can then start a batch once it runs."
Q21. What is Async SOQL and when would you use it instead of Batch Apex? Advanced
Async SOQL is a Salesforce platform feature that runs complex aggregation queries or large data operations asynchronously in the background, writing results to a BigObject or Custom Object — without touching Apex governor limits. Use it when you need to aggregate billions of records that would exceed Batch Apex limits.
Factor | Batch Apex | Async SOQL
Max records | ~50 million (QueryLocator) | Billions (platform-level)
Custom logic per record | ✅ Full Apex code | ❌ Only SOQL operations
Governor limits | Applies per batch | ❌ Not applicable
Results destination | Any object or callout | BigObject or Custom Object
Use case | Transform + process records | Aggregate reporting on massive data
Say This in Interview
"Async SOQL runs platform-level queries on billions of records without Apex governor limits and writes aggregated results to a BigObject — I'd use it for large-scale reporting operations that exceed Batch Apex's practical data volume limits, while Batch Apex handles cases where I need custom Apex logic per record."
Q22. How do you implement idempotency in Queueable Apex jobs that make external API callouts? Advanced
🔄Idempotency means running the same operation multiple times produces the same result as running it once. In Queueable callouts, implement it using a unique transaction ID sent in every API call — the external system uses it to deduplicate repeated requests from retries.
✅ Implementation Strategy
  • 1️⃣Unique Idempotency Key: Generate UUID or use Salesforce record ID + timestamp as the key for each request
  • 2️⃣Store in header: Send the key as an X-Idempotency-Key header — the external system returns its cached response if it has seen the key before (see the sketch after this list)
  • 3️⃣Track locally: Store the key on the Salesforce record — if a retry finds an existing key, it reuses the same key rather than generating a new one
  • 4️⃣Check-before-act: Before creating a record via API, query if it already exists using the external ID
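A minimal sketch of the header-based approach, using the Salesforce record Id as the stable key; the Named Credential endpoint and buildPayload helper are illustrative:
public class BCOrderSyncJob implements Queueable, Database.AllowsCallouts {
    private Id orderId;
    public BCOrderSyncJob(Id orderId) { this.orderId = orderId; }

    public void execute(QueueableContext ctx) {
        HttpRequest req = new HttpRequest();
        req.setEndpoint('callout:BC_API/orders');   // illustrative Named Credential
        req.setMethod('POST');
        // Same key on every retry, so the external system can deduplicate
        req.setHeader('X-Idempotency-Key', String.valueOf(orderId));
        req.setBody(buildPayload(orderId));         // illustrative helper
        HttpResponse res = new Http().send(req);
        // ...on success, store the external order number back on the record
    }
}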
🏭 Real World

Our BC order sync Queueable uses the Salesforce Order ID as the idempotency key in every API call header. If the job retries after a timeout, BC receives the same key, recognises the order was already created, and returns the existing BC order number instead of creating a duplicate. We store the BC order number on the Salesforce record to confirm success.

Say This in Interview
"I implement idempotency using the Salesforce record ID as the idempotency key sent in every API call header — on retry, the external system returns a cached response for that key instead of processing the duplicate request, preventing double-creation without requiring complex state tracking."
Q23. What happens when a @future method throws an unhandled exception? Medium
⚠️When a @future method throws an unhandled exception, the exception is logged in the Apex debug log but is otherwise silently swallowed — the calling transaction has already completed, so the exception cannot be propagated. There is no built-in notification mechanism.
🔍 What Happens vs What You Might Expect
  • Calling code does NOT receive the exception — it already completed before @future ran
  • No automatic retry — the failure is permanent unless you implement retry logic
  • No email notification by default — you won't know it failed unless you check debug logs
  • Exception appears in Setup → Apex Jobs → view job details
  • Fix: wrap @future body in try-catch and insert a failure record or send email
@future(callout=true)
public static void syncToBC(Id orderId) {
    try {
        // make callout
    } catch(Exception e) {
        // MUST handle here — caller won't see this exception
        insert new Async_Error__c(
            Job_Type__c = 'BCSync',
            Record_Id__c = orderId,
            Error__c = e.getMessage()
        );
    }
}
Say This in Interview
"@future exceptions are silently swallowed — the calling transaction already completed so the exception can't propagate back. I always wrap @future bodies in try-catch and insert a failure record or send an alert email, because without that, failures are invisible."
Q24. How can you monitor the progress and health of a running Batch Apex job? Medium
📊Monitor Batch Apex through the Setup → Apex Jobs UI, SOQL queries on the AsyncApexJob object, or programmatically in the finish() method via BatchableContext. For real-time progress tracking in a UI, use Database.Stateful to accumulate progress and expose it via a custom LWC dashboard.
// Query batch job status programmatically
AsyncApexJob job = [
    SELECT Id, Status, JobItemsProcessed, TotalJobItems,
           NumberOfErrors, CreatedDate, CompletedDate, ExtendedStatus
    FROM AsyncApexJob
    WHERE JobType = 'BatchApex'
      AND Status NOT IN ('Completed', 'Failed', 'Aborted')
    ORDER BY CreatedDate DESC
    LIMIT 1
];

// Calculate progress percentage
Decimal progress = (job.TotalJobItems > 0)
    ? (job.JobItemsProcessed / (Decimal) job.TotalJobItems) * 100
    : 0;

// In finish() — access context for final stats
global void finish(Database.BatchableContext bc) {
    AsyncApexJob completedJob = [
        SELECT Status, NumberOfErrors
        FROM AsyncApexJob
        WHERE Id = :bc.getJobId()
    ];
}
Say This in Interview
"I monitor Batch Apex via SOQL on AsyncApexJob — querying JobItemsProcessed vs TotalJobItems gives percentage completion, NumberOfErrors shows failures, and ExtendedStatus has the failure message. For real-time LWC dashboards I use Database.Stateful to track progress state and expose it via an Apex method the component polls."
Q25. What are Platform Events and how do they differ from sObject DML? Medium
📡Platform Events are immutable messages published to the event bus rather than records stored in the database. Delivery depends on the event's publish behaviour: "Publish After Commit" (the default) delivers only after the transaction commits successfully, while "Publish Immediately" delivers even if the publishing transaction later rolls back.
Factor | sObject DML | Platform Events
Transaction rollback | Changes reverted | ⚠️ "Publish Immediately" events still delivered; "Publish After Commit" events are not
Data persistence | Stored in database | Transient — retained on the bus only for a limited replay window
Subscribers | Triggers, Flows | Flows, Apex triggers, external systems, LWC
Cross-system | ❌ Salesforce only | ✅ External systems can subscribe
Delivery | Immediate, in-transaction | Near real-time, with ReplayId for catch-up
🏭 Real World

When an Order is Approved in our org, we publish an Order_Approved__e Platform Event. Our warehouse system subscribes via CometD and receives the event in real time to start picking. This decouples Salesforce from the warehouse — if the warehouse is temporarily down, it can replay missed events using ReplayId when it comes back online.
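Publishing from Apex is a one-liner. A minimal sketch, assuming an Order_Approved__e event with an Order_Id__c text field:
Order_Approved__e evt = new Order_Approved__e(Order_Id__c = String.valueOf(orderId));
Database.SaveResult sr = EventBus.publish(evt);
if(!sr.isSuccess()) {
    for(Database.Error err : sr.getErrors()) {
        System.debug('Publish failed: ' + err.getMessage());
    }
}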

Say This in Interview
"Platform Events decouple publishers from subscribers — the event is delivered even if the publishing transaction rolls back, external systems can subscribe via CometD, and ReplayId allows subscribers to catch up on missed events. Unlike DML, there's no stored record — it's a fire-and-forget event bus."
Q26. When would you use Platform Events over Queueable Apex for integration? Advanced
🤔Use Platform Events when you need decoupled, pub-sub architecture where external systems must also subscribe, or when the receiver is unknown at publish time. Use Queueable when the integration target is known, the logic is complex, and you need full Apex control with callout capability.
Scenario | Platform Events | Queueable Apex
External system must subscribe | ✅ Best fit | ❌ Can't subscribe
Multiple subscribers to same event | ✅ Pub-sub native | ❌ Would need multiple jobs
Complex retry logic needed | ❌ No built-in retry | ✅ Finalizer + retry
LWC real-time UI update needed | ✅ EmpApi subscription | ❌ No direct LWC integration
Full Apex callout to external API | ❌ Subscriber trigger still needs callout | ✅ AllowsCallouts interface
Say This in Interview
"Platform Events for pub-sub scenarios where multiple subscribers — including external systems and LWC via EmpApi — need to react to the same event. Queueable when I'm making a specific callout with complex retry logic and need full Apex control over the integration."
Q27. How do you handle callouts from within Batch Apex execute() methods? Medium
📞Implement the Database.AllowsCallouts interface on the Batch class to enable callouts from execute(). Each batch chunk processes separately, so you can make callouts per chunk. However, if an execute() invocation makes a callout, no DML can come before it in that same invocation.
global class OrderSyncBatch implements Database.Batchable<sObject>, Database.AllowsCallouts {   // ← AllowsCallouts required

    global Database.QueryLocator start(Database.BatchableContext bc) {
        return Database.getQueryLocator(
            [SELECT Id, Status__c FROM Order__c WHERE Synced__c = false]
        );
    }

    global void execute(Database.BatchableContext bc, List<Order__c> scope) {
        // ✅ Callout first, then DML
        HttpResponse res = callExternalAPI(scope);
        // Process response...
        update scope;   // DML after callout is fine
    }

    global void finish(Database.BatchableContext bc) {}
}
⚠️ Important Rules for Callouts in Batch
  • Batch size defaults to 200 — reduce to 1-10 when making one callout per record to avoid timeouts
  • Cannot do DML before callout in the same execute() invocation
  • Each execute() chunk gets fresh callout limits — up to 100 callouts, with a 10-second default timeout per callout (configurable up to 120 seconds)
Say This in Interview
"I implement Database.AllowsCallouts on the Batch class to enable callouts from execute(). I reduce batch size to match the external API's rate limits — typically 1-10 records per chunk — and always make callouts before DML in the same execute() invocation."
Q28. How do you design a scheduled Apex job that reliably survives org deployments? Advanced
⚠️Scheduled Apex jobs are automatically aborted when the scheduled class is included in a deployment. You must re-schedule them post-deployment. The best practice is to include a post-deployment script or Setup menu check as part of your release process.
✅ Resilient Scheduling Strategy
  1. Include re-scheduling logic in a custom Admin tab or an anonymous Apex script that runs post-deployment
  2. In the Schedulable class itself, check if it is already scheduled before creating duplicate jobs
  3. Use Custom Metadata to store the cron expression — admins can change the schedule without a code deployment
  4. Add a post-deployment verification step to your release checklist to confirm the job is running
// Check if already scheduled before re-scheduling
List<CronTrigger> existing = [
    SELECT Id
    FROM CronTrigger
    WHERE CronJobDetail.Name = 'BC Nightly Sync'
      AND State NOT IN ('COMPLETE', 'DELETED', 'ERROR')
];
if(existing.isEmpty()) {
    System.schedule('BC Nightly Sync', '0 0 2 * * ?', new BCSyncSchedulable());
}
Say This in Interview
"Deployments abort scheduled jobs — I add a post-deployment step that checks CronTrigger for existing schedules and re-creates them if missing. I store the cron expression in Custom Metadata so schedule changes don't require code deployments."
Q29. What are the 24-hour async Apex execution limits and how do they affect architecture? Advanced
Salesforce limits the total number of asynchronous Apex executions to 250,000 per 24-hour period per org (or licenses × 200, whichever is greater). High-volume integrations that enqueue thousands of Queueable jobs daily can exhaust this limit and block all async processing.
Limit | Value | Impact
Max async executions / 24 hrs | 250,000 (or licenses × 200) | Affects all async types combined
Max concurrent Batch jobs | 5 active | Additional batches are queued
Queueable jobs enqueued per transaction | 50 | Can't bulk-enqueue one job per record
Max scheduled jobs | 100 active per org | Plan job consolidation
✅ Architectural Implications
  • Batch Apex with a large chunk size reduces total job count — one chunk of 200 records is a single execute() invocation vs 200 separate Queueable jobs
  • Monitor via Setup → Apex → Apex Jobs or query AsyncApexJob for daily counts
  • Don't enqueue one Queueable per record — batch records and process them in groups (sketched below)
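A minimal sketch of that grouping pattern — one Queueable handles a whole list of ids, so 200 records cost one async execution instead of 200:
public class OrderSyncJob implements Queueable, Database.AllowsCallouts {
    private List<Id> orderIds;
    public OrderSyncJob(List<Id> orderIds) { this.orderIds = orderIds; }

    public void execute(QueueableContext ctx) {
        // process all ids in this one async execution...
    }
}

// Caller — a single enqueue for the whole set
System.enqueueJob(new OrderSyncJob(new List<Id>(orderIdSet)));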
Say This in Interview
"The 24-hour async limit of 250,000 executions applies to all async types combined — I architect high-volume integrations to batch records into groups rather than enqueuing one Queueable per record, reducing async execution consumption and staying well within daily limits."
Q30. How do you debug a Queueable job that fails intermittently in Production? Advanced
🔍Intermittent Queueable failures are the hardest to debug because async jobs don't surface exceptions to the calling transaction. Use a combination of Apex debug logs with async trace flags, custom error logging to a Salesforce object, and System.Finalizer to capture failure context.
🔍 Debugging Steps
  1. Setup → Debug Logs → set a trace flag for the System user or the specific user triggering the job — set the Apex level to FINEST
  2. Reproduce the failure — check debug logs for the async execution thread
  3. Attach a System.Finalizer to capture the exception object and write it to an Async_Error__c record with the full stack trace
  4. Check for time-of-day patterns in failures — could indicate external API rate limits or maintenance windows
  5. Query AsyncApexJob for the ExtendedStatus field — it contains the exception message for failed jobs
// Check recent failures and their error messages
List<AsyncApexJob> failedJobs = [
    SELECT Id, Status, ExtendedStatus, CreatedDate
    FROM AsyncApexJob
    WHERE ApexClass.Name = 'BCCalloutJob'
      AND Status = 'Failed'
    ORDER BY CreatedDate DESC
    LIMIT 10
];
Say This in Interview
"I debug intermittent Queueable failures using three tools — async debug log trace flags to capture the execution, a System.Finalizer to write exception details to a custom object, and AsyncApexJob SOQL to check ExtendedStatus for the failure message — then correlate timing with external system patterns."
🔍
Advanced SOQL & Query Optimisation
SOQL internals, selectivity, polymorphism, and performance tuning
Q31–Q45
Q31. What is SOQL Polymorphism and how do you use TYPEOF in a query? Advanced
🔄SOQL Polymorphism allows querying polymorphic fields — fields that can reference different object types, like WhoId on Task (can be Contact or Lead) or WhatId (can be Account, Opportunity, etc.). TYPEOF lets you specify different fields to return based on the actual type of the referenced record.
// TYPEOF in SOQL — query different fields based on what WhoId points to
List<Task> tasks = [
    SELECT Id, Subject,
        TYPEOF Who
            WHEN Contact THEN FirstName, LastName, Email
            WHEN Lead THEN FirstName, LastName, Company
        END
    FROM Task
    WHERE OwnerId = :userId
];

// Access polymorphic field in Apex
for(Task t : tasks) {
    if(t.Who instanceof Contact) {
        Contact c = (Contact) t.Who;
        System.debug(c.Email);
    } else if(t.Who instanceof Lead) {
        Lead l = (Lead) t.Who;
        System.debug(l.Company);
    }
}
Say This in Interview
"SOQL TYPEOF handles polymorphic relationship fields like Task.WhoId that can point to different object types — I specify which fields to retrieve per type in the query, then use instanceof in Apex to safely cast the result to the correct type before accessing type-specific fields."
Q32. What are the limitations of SOQL relationship queries — how many levels deep can you go? Medium
🔗SOQL supports up to 5 levels of parent traversal in child-to-parent dot notation, but parent-to-child subqueries go only 1 level deep — you cannot nest a subquery inside a subquery. A single query can reference at most 55 child-to-parent relationship fields and 20 parent-to-child subqueries.
Relationship Type | Max | Example
Child-to-parent (dot notation) | 5 levels | Contact.Account.Owner.Profile.Name
Parent-to-child (subquery) | 1 level deep | SELECT (SELECT Id FROM Contacts) FROM Account
Child-to-parent relationship fields | 55 per query | Across all relationships combined
Subqueries per query | 20 | Multiple child relationships
// Parent traversal via dot notation (up to 5 levels)
[SELECT Account.Owner.Manager.Profile.Name FROM Contact]

// Parent-to-child — only 1 subquery level allowed
[SELECT Id, (SELECT Id FROM Contacts), (SELECT Id FROM Opportunities) FROM Account]
// ❌ Can't nest a subquery inside a subquery
Say This in Interview
"SOQL allows up to 5 levels of parent traversal using dot notation and 20 total relationship fields per query. Parent-to-child subqueries don't support nesting — you can't put a subquery inside a subquery. For deeper relationships I split into multiple queries and join in Apex using Maps."
Q33. How does using a formula field in a SOQL WHERE clause affect query performance? Advanced
⚠️Formula fields are generally not indexable — Salesforce Support can index only deterministic formulas, on request. Filtering on a non-indexed formula field in a WHERE clause results in a full table scan, even on a large object with millions of records. This causes non-selective queries, potential timeouts, and UNABLE_TO_LOCK_ROW errors on high-volume objects.
✅ Solutions
  • 1️⃣Create a stored field: Replace the formula with a regular field populated by a trigger or Flow — stored fields can be indexed (see the sketch after this list)
  • 2️⃣Filter on source fields: If formula is Account.Name + '-' + Contact.Name, filter on both source fields separately
  • 3️⃣Custom Index: Request Salesforce Support to create a custom index on a stored field derived from the formula logic
  • Never filter on cross-object formula fields in WHERE clause on large datasets
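A minimal sketch of solution 1 — a before-save trigger persisting the formula's logic into an indexable stored field; field names are illustrative:
trigger OrderTotalINR on Order__c (before insert, before update) {
    for(Order__c o : Trigger.new) {
        // Same calculation the formula performed, now stored and indexable
        o.Total_INR_Stored__c = (o.Amount__c == null || o.Exchange_Rate__c == null)
            ? null
            : o.Amount__c * o.Exchange_Rate__c;
    }
}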
🏭 Real World

We had an Order__c formula field Total_INR_Value__c that calculated INR equivalent using exchange rate. Filtering on it caused consistent timeouts on our 500K order dataset. We replaced it with a stored field populated by a trigger — indexed it — and query performance improved from timeout to under 1 second.

Say This in Interview
"Formula fields can't be indexed — filtering on them in WHERE causes full table scans regardless of record count. I replace frequently-filtered formula fields with stored fields populated by triggers or Flows, which can be indexed and make queries selective."
Q34. How do you query records where a multi-select picklist field contains a specific value? Medium
🔍Multi-select picklist fields store values as semicolon-separated strings. Use the INCLUDES keyword in SOQL to check if the field contains a specific value — this is the only correct way, as the = operator would only match the exact full string.
// ✅ Correct — INCLUDES keyword
[SELECT Id, Name, Products__c FROM Account WHERE Products__c INCLUDES ('Bioreactors')]

// Multiple values — records that have ANY of these
[SELECT Id FROM Account WHERE Products__c INCLUDES ('Bioreactors', 'Fermenters')]

// EXCLUDES — records that do NOT have the value
[SELECT Id FROM Account WHERE Products__c EXCLUDES ('Discontinued')]

// ❌ WRONG — only matches if EXACTLY this is the full value
[SELECT Id FROM Account WHERE Products__c = 'Bioreactors']
// Misses: 'Bioreactors;Fermenters' — treated as a different string
Say This in Interview
"Multi-select picklist fields store values as semicolon-separated strings — I use INCLUDES() in SOQL to check if a specific value is present in the field. Using = only matches the exact full string and misses records with multiple selections, making INCLUDES the only correct approach."
Q35. What is the difference between COUNT() and COUNT(field) in SOQL aggregate queries? Easy
📊COUNT() counts all rows including nulls and returns a single Integer directly. COUNT(fieldName) counts only non-null values for that specific field and is used with GROUP BY to count per group. In aggregate result context, you access the value differently.
// COUNT() — counts all rows, returns Integer directly
Integer total = [SELECT COUNT() FROM Account WHERE Industry = 'Tech'];

// COUNT(field) — counts non-null values of that field, use with GROUP BY
List<AggregateResult> results = [
    SELECT Industry, COUNT(Id) recordCount
    FROM Account
    GROUP BY Industry
];
for(AggregateResult ar : results) {
    String industry = (String) ar.get('Industry');
    Integer recordCount = (Integer) ar.get('recordCount');
}

// COUNT(field) vs COUNT():
// COUNT(Email) only counts rows where Email is not null
// COUNT() counts ALL rows regardless of null fields
Say This in Interview
"COUNT() returns a single Integer of all rows including nulls and can be used alone. COUNT(fieldName) counts only non-null values of that field, is used inside AggregateResult queries with GROUP BY, and is accessed via ar.get('aliasName') — the key distinction is null handling and how you access the result."
Q36. How do the FOR VIEW and FOR REFERENCE clauses in SOQL work? Advanced
👁️FOR VIEW and FOR REFERENCE are SOQL clauses that update the LastViewedDate and LastReferencedDate fields on queried records. FOR VIEW updates both fields (simulating the user viewing the record); FOR REFERENCE updates only LastReferencedDate. They're used by custom components that need Salesforce's Recent Items feature to work correctly.
// FOR VIEW — updates LastViewedDate AND LastReferencedDate
List<Account> viewed = [
    SELECT Id, Name FROM Account
    WHERE Id = :recordId
    FOR VIEW
];
// Record now appears in Recent Items and "Recently Viewed" list views

// FOR REFERENCE — only updates LastReferencedDate
List<Account> referenced = [
    SELECT Id, Name FROM Account
    WHERE Id IN :relatedIds
    FOR REFERENCE
];
// Record referenced but not "viewed" by the user
Say This in Interview
"FOR VIEW in SOQL updates both LastViewedDate and LastReferencedDate on queried records — I use it in custom LWC components that display records so they appear in the user's Recent Items list, matching the behaviour of standard Salesforce record pages."
Q37. What is the difference between SOQL LIKE operator and FIND in SOSL? Easy
🔍SOQL LIKE does pattern matching on a single field using % (any characters) and _ (single character) wildcards. SOSL FIND does full-text search across multiple fields and objects simultaneously using Salesforce's search index — much faster for text search but less precise than LIKE.
Feature | SOQL LIKE | SOSL FIND
Search scope | One field | Multiple fields + objects
Wildcards | % and _ | * and ?
Uses search index | ❌ Full scan | ✅ Yes — fast
Numeric search | ✅ Yes | ❌ Text only
Result type | List<sObject> | List<List<sObject>>
Min chars for wildcard | None | 2 chars with *
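A quick side-by-side of the two, using standard syntax:
// SOQL LIKE — pattern match on a single field (no search index)
List<Account> byName = [SELECT Id, Name FROM Account WHERE Name LIKE 'Acme%'];

// SOSL FIND — indexed full-text search across objects and fields
List<List<SObject>> results = [
    FIND 'Acme*' IN NAME FIELDS
    RETURNING Account(Id, Name), Contact(Id, Name)
];
List<Account> accounts = (List<Account>) results[0];
List<Contact> contacts = (List<Contact>) results[1];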
Say This in Interview
"SOQL LIKE does field-level pattern matching but doesn't use the search index — it's a full scan. SOSL FIND uses Salesforce's search index to search text across multiple objects and fields simultaneously, making it significantly faster for text search scenarios but limited to text fields only."
Q38. What is the WITH USER_MODE clause in SOQL and how does it differ from WITH SECURITY_ENFORCED? Advanced
👤WITH USER_MODE enforces the running user's complete sharing model — OWD, sharing rules, role hierarchy, object permissions, AND field-level security all in one clause. WITH SECURITY_ENFORCED only enforces FLS and object permissions but does NOT enforce sharing rules (record visibility).
Enforcement | WITH SECURITY_ENFORCED | WITH USER_MODE
Field Level Security (FLS) | ✅ Yes | ✅ Yes
Object permissions | ✅ Yes | ✅ Yes
Sharing rules / OWD | ❌ No | ✅ Yes
Role hierarchy | ❌ No | ✅ Yes
On violation | Throws exception | Throws exception
Introduced in API | v48.0 | v55.0 (newer)
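Usage is a single clause in inline SOQL; dynamic SOQL also accepts an AccessLevel argument in recent API versions:
// Inline — full user-context enforcement (sharing + FLS + object perms)
List<Account> accs = [SELECT Id, Name FROM Account WITH USER_MODE];

// Dynamic equivalent
List<Account> accs2 = Database.query('SELECT Id, Name FROM Account', AccessLevel.USER_MODE);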
Say This in Interview
"WITH USER_MODE is the more complete security enforcement — it respects the full sharing model including OWD and sharing rules, not just FLS. WITH SECURITY_ENFORCED only enforces FLS and object permissions. In 2026 I use WITH USER_MODE for truly user-context queries, equivalent to running code 'with sharing'."
Q39. How do you use SOQL date literals and what are the most useful ones? Easy
📅SOQL date literals are predefined keywords representing dynamic date ranges — they automatically adjust based on the current date, eliminating hardcoded dates in queries. Essential for reports, dashboards, and scheduled jobs that need relative date filtering.
Date Literal | What It Means | Use Case
TODAY | Current date only | Today's records
THIS_WEEK | Current week (first day depends on locale) | Weekly reports
THIS_MONTH | 1st to last day of month | Monthly totals
THIS_FISCAL_QUARTER | Current fiscal quarter | Quarterly pipeline
LAST_N_DAYS:30 | Last 30 calendar days | Rolling 30-day reports
NEXT_N_DAYS:7 | Next 7 days | Upcoming tasks/renewals
LAST_MONTH | Full previous month | Month-over-month comparison

// Orders created this month
[SELECT Id FROM Order__c WHERE CreatedDate = THIS_MONTH]

// Opportunities closing in next 30 days
[SELECT Id, Name FROM Opportunity WHERE CloseDate = NEXT_N_DAYS:30]

// Records modified in last 7 days
[SELECT Id FROM Account WHERE LastModifiedDate = LAST_N_DAYS:7]
Say This in Interview
"SOQL date literals like THIS_MONTH, LAST_N_DAYS:30, and NEXT_N_DAYS:7 are dynamic — they adjust automatically based on the current date so I never hardcode dates in scheduled jobs or reports. LAST_N_DAYS:N includes today, NEXT_N_DAYS:N starts from tomorrow."
Q40. What causes a SOQL query to be non-selective and how do you fix it? Advanced
⚠️A query is non-selective when its filters don't use an index or exceed the selectivity thresholds — roughly 30% of the first million records for a standard index (15% beyond that, capped at 1 million rows), and 10%/5% (capped at 333,333 rows) for a custom index. On objects with more than 200,000 records, non-selective queries cause full table scans, timeouts, and the "System.QueryException: Non-selective query against large object type" error in trigger context.
🔍 Common Causes & Fixes
  • Filtering on non-indexed field: Fix — request custom index from Salesforce Support or filter on an indexed field (Id, Name, External ID, Lookup fields)
  • Formula field in WHERE: Fix — store the computed value in a regular field + index it
  • NULL filter on non-indexed field: WHERE Field__c = null is always non-selective
  • LIKE '%searchterm' (leading wildcard): Fix — never use a leading % — use 'searchterm%' instead (example below)
  • Combine filters: Multiple indexed field filters in AND can make a query selective even if individual filters aren't
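For example, the leading-wildcard fix looks like this:
// ❌ Leading wildcard — index can't be used, full table scan
List<Account> slow = [SELECT Id FROM Account WHERE Name LIKE '%bio%'];

// ✅ Prefix match — the Name index can be used
List<Account> fast = [SELECT Id FROM Account WHERE Name LIKE 'Bio%'];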
Say This in Interview
"Non-selective queries scan the entire table and timeout on large datasets. I fix them by filtering on indexed fields, never using leading wildcards or formula fields in WHERE, requesting custom indexes from Salesforce Support for frequently-queried non-standard fields, and combining multiple indexed filters to improve selectivity."
Q41. What is the SOQL FOR UPDATE clause and what are its risks? Advanced
🔒FOR UPDATE locks the queried records for the duration of the transaction — preventing other transactions from modifying them simultaneously. While this prevents dirty reads, it can cause UNABLE_TO_LOCK_ROW exceptions if other transactions are also trying to modify the same records concurrently.
// Lock records for the duration of the transaction
List<Order__c> orders = [
    SELECT Id, Status__c FROM Order__c
    WHERE Id IN :orderIds
    FOR UPDATE
];
// No other transaction can UPDATE these records until ours completes
for(Order__c o : orders) {
    o.Status__c = 'Processing';
}
update orders;   // Lock released when the transaction commits
⚠️ Risks and Mitigations
  • ⚠️UNABLE_TO_LOCK_ROW: If another transaction holds a lock on the same records — both fail. Use try-catch and retry logic.
  • ⚠️Deadlock: Two transactions each waiting for the other's locked records — both timeout after 10 seconds
  • When to use: Only when race conditions on the same records are a genuine business risk — e.g., order reservation systems with concurrent requests
Say This in Interview
"FOR UPDATE locks records for the transaction duration to prevent concurrent modifications — useful for reservation systems where two users might try to claim the same inventory simultaneously. The risk is UNABLE_TO_LOCK_ROW exceptions when records are already locked, so I implement retry logic with exponential backoff."
Q42. How do you query Custom Metadata Type records efficiently in Apex? Medium
⚙️Custom Metadata records are queried via SOQL on the __mdt object. Salesforce caches Custom Metadata aggressively — after the first query in a transaction, subsequent accesses hit the cache, not the database. Store the result in a static map for the "query once, use many times" pattern.
// Efficient pattern — static cache
public class BCConfigService {
    private static Map<String, BC_Config__mdt> configCache;

    public static BC_Config__mdt getConfig(String name) {
        if(configCache == null) {
            configCache = new Map<String, BC_Config__mdt>();
            for(BC_Config__mdt c : [
                SELECT DeveloperName, Endpoint__c, Timeout__c
                FROM BC_Config__mdt
            ]) {
                configCache.put(c.DeveloperName, c);
            }
        }
        return configCache.get(name);
    }
}
// One SOQL query for the entire transaction — all callers reuse the cache
Say This in Interview
"I query Custom Metadata with one SOQL statement at the start and store results in a static Map — all subsequent callers in the same transaction get instant cache access. Custom Metadata's own caching means even repeated SOQL queries on __mdt are optimised, but a static Map eliminates even those."
Q43What is SOQL injection and how do you prevent it in Dynamic SOQL? Medium
🚨SOQL injection is a security vulnerability where malicious user input modifies the structure of a Dynamic SOQL query — similar to SQL injection. An attacker can bypass filters, access unauthorised records, or modify query logic by injecting SOQL keywords into user-supplied string values.
// ❌ VULNERABLE — user input directly concatenated
String name = ApexPages.currentPage().getParameters().get('name');
String query = 'SELECT Id FROM Account WHERE Name = \'' + name + '\'';
// Attacker enters: test' OR Name != '   → returns ALL accounts!

// ✅ FIX 1 — String.escapeSingleQuotes()
String safeName = String.escapeSingleQuotes(name);
String safeQuery = 'SELECT Id FROM Account WHERE Name = \'' + safeName + '\'';

// ✅ FIX 2 — Bind variables (preferred)
String bindQuery = 'SELECT Id FROM Account WHERE Name = :name';
List<Account> results = Database.query(bindQuery);
// 'name' is bound safely — injection impossible
Say This in Interview
"SOQL injection lets attackers manipulate Dynamic SOQL by injecting SOQL operators into string inputs. I prevent it using bind variables (:variableName syntax) as the primary defence — they're type-safe and injection-proof. String.escapeSingleQuotes() is the fallback when bind variables aren't applicable."
Q44What is the OFFSET clause in SOQL and when would you use it? Medium
📃OFFSET skips a specified number of records from the start of the result set — enabling page-by-page pagination in SOQL queries. Combined with LIMIT and ORDER BY, it allows building paginated record lists in custom LWC or Apex components.
// Page 1 — first 20 records
List<Account> page1 = [SELECT Id, Name FROM Account ORDER BY Name LIMIT 20 OFFSET 0];

// Page 2 — next 20 records
List<Account> page2 = [SELECT Id, Name FROM Account ORDER BY Name LIMIT 20 OFFSET 20];

// Calculate OFFSET from the page number
Integer pageSize = 20;
Integer pageNumber = 3; // user navigated to page 3
Integer offset = (pageNumber - 1) * pageSize; // = 40
List<Account> page3 = Database.query(
    'SELECT Id, Name FROM Account ORDER BY Name LIMIT ' + pageSize + ' OFFSET ' + offset
);
⚠️ OFFSET Limitations
  • ⚠️Maximum OFFSET value is 2,000 — cannot paginate beyond 2,000 records using OFFSET
  • ⚠️Each OFFSET query fetches records from scratch — not cached. Performance degrades on high page numbers
  • For records beyond 2,000 — use cursor-based pagination with a WHERE clause on the last-seen ID (sketched below)
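A minimal sketch of that cursor-based approach — the caller keeps the last returned Id between calls and passes it back in:
// Cursor-based pagination — works at any depth, unlike OFFSET
public static List<Account> nextPage(Id lastSeenId, Integer pageSize) {
    if(lastSeenId == null) {
        // First page
        return [SELECT Id, Name FROM Account ORDER BY Id LIMIT :pageSize];
    }
    return [SELECT Id, Name FROM Account
            WHERE Id > :lastSeenId
            ORDER BY Id
            LIMIT :pageSize];
}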
Say This in Interview
"OFFSET enables simple page-based pagination up to 2,000 records. For larger datasets I switch to cursor-based pagination — storing the last record's ID and using WHERE Id > :lastId ORDER BY Id LIMIT N, which works at any scale and is more efficient than high OFFSET values."
Q45How do you use SOQL aggregate functions in a Batch Apex start() method? Advanced
📊Batch Apex start() must return either a Database.QueryLocator or an Iterable. Aggregate SOQL (with GROUP BY) returns List<AggregateResult> — you can return this as an Iterable<AggregateResult> from start() to batch process aggregate results rather than individual records.
global class AggregateProcessBatch implements Database.Batchable<AggregateResult> {

    // Return an Iterable of aggregate results
    global Iterable<AggregateResult> start(Database.BatchableContext bc) {
        return [
            SELECT Account__c, SUM(Amount__c) totalAmount, COUNT(Id) orderCount
            FROM Order__c
            GROUP BY Account__c
        ];
    }

    global void execute(Database.BatchableContext bc, List<AggregateResult> scope) {
        for(AggregateResult ar : scope) {
            Id accId = (Id) ar.get('Account__c');
            Decimal total = (Decimal) ar.get('totalAmount');
            // Update account summary fields...
        }
    }

    global void finish(Database.BatchableContext bc) {}
}
Say This in Interview
"Batch Apex can process AggregateResult by returning an Iterable<AggregateResult> from start() — I use this pattern when I need to update summary records based on aggregated child data, like rolling up order totals to Account fields, processing one AccountId's summary per batch chunk."
🔗
Advanced Integration Patterns
OAuth flows, Circuit Breaker, Composite API, mTLS, and enterprise integration architecture
Q46–Q60
Q46What is the Circuit Breaker pattern in Salesforce integration and how do you implement it? Advanced
The Circuit Breaker pattern prevents an integration from repeatedly calling a failing external system. Like an electrical circuit breaker — after a threshold of failures, the circuit "opens" and stops all calls for a cooldown period, returning a fallback response instead of hammering a down system.
🔄 Three States
State | Behaviour | Transition
CLOSED (normal) | Calls pass through to the external API | On failure threshold → OPEN
OPEN (failing) | All calls rejected, fallback returned | After cooldown period → HALF-OPEN
HALF-OPEN (testing) | One test call allowed through | Success → CLOSED / Fail → OPEN
// Store circuit state in Custom Metadata or Custom Settings
public class CircuitBreaker {
    private static Integer FAILURE_THRESHOLD = 5;
    private static Integer COOLDOWN_MINUTES = 15;

    public static Boolean isOpen() {
        Circuit_State__c state = Circuit_State__c.getOrgDefaults();
        if(state.State__c == 'OPEN') {
            // Check whether the cooldown period has passed
            DateTime openedAt = state.Opened_At__c;
            if(openedAt.addMinutes(COOLDOWN_MINUTES) < DateTime.now()) {
                setState('HALF-OPEN');
                return false; // Allow one test call
            }
            return true; // Still in cooldown
        }
        return false;
    }

    public static void recordFailure() {
        Circuit_State__c state = Circuit_State__c.getOrgDefaults();
        state.Failure_Count__c =
            (state.Failure_Count__c == null ? 0 : state.Failure_Count__c) + 1;
        if(state.Failure_Count__c >= FAILURE_THRESHOLD) {
            setState('OPEN');
        }
        update state;
    }

    // setState(...) is a helper that persists State__c and Opened_At__c
}
🏭 Real World

Our BC integration uses Circuit Breaker — after 5 consecutive failures, the circuit opens for 15 minutes. During this time, all Queueable sync jobs return immediately with a "BC unavailable" status instead of making HTTP calls that would timeout. After cooldown, one test call determines if BC is back. This reduced timeout-related CPU limit errors by 90%.

Say This in Interview
"Circuit Breaker tracks consecutive failures in a Custom Setting — after the threshold, the circuit opens and all integration calls fail fast with a fallback response instead of waiting for HTTP timeouts. After a cooldown period it allows one test call, and closes the circuit on success."
Q47What is the Salesforce Composite API and when would you use it? Advanced
🔀The Composite API allows executing multiple REST API requests in a single HTTP call, with the ability to reference results from earlier requests in subsequent ones using reference IDs. It dramatically reduces the number of API round trips needed for complex multi-step operations.
📊 Composite API Resources
Resource | What It Does | Max Requests
/composite | Up to 25 requests, can chain results | 25
/composite/batch | Independent requests, no chaining | 25
/composite/sobjects | Bulk create/update same object type | 200 records
/composite/tree | Create parent + children in one call | 200 records
// Example: Create an Account, then a Contact linked to it — 1 API call
{
  "compositeRequest": [
    {
      "method": "POST",
      "url": "/services/data/v59.0/sobjects/Account",
      "referenceId": "newAccount",
      "body": { "Name": "ISPL Pharma" }
    },
    {
      "method": "POST",
      "url": "/services/data/v59.0/sobjects/Contact",
      "referenceId": "newContact",
      "body": {
        "LastName": "Singh",
        "AccountId": "@{newAccount.id}"   ← references the previous result
      }
    }
  ]
}
Say This in Interview
"Composite API batches up to 25 REST requests into one HTTP call with result chaining using @{referenceId.field} syntax — I use it when an external system needs to create a parent and children in one operation, eliminating round trips and staying within API rate limits."
Q48What is the OAuth 2.0 JWT Bearer Flow and when would you use it for server-to-server integration? Advanced
🔑JWT Bearer Flow enables server-to-server authentication without user interaction. The calling server signs a JWT assertion with a private key, exchanges it for a Salesforce access token — no user clicks "Allow Access." Used for automated integrations, scheduled jobs, and background processing where no user is present.
🔄 JWT Bearer Flow Steps
  1. Create a Connected App in Salesforce with "Use Digital Signatures" enabled
  2. Upload the public key certificate to the Connected App; the private key stays on the calling server
  3. The calling server builds the JWT payload: iss (client ID), sub (Salesforce username), aud (login URL), exp (expiry)
  4. The server signs the JWT with the private key using the RS256 algorithm (in Apex, Auth.JWS handles this — see the sketch below)
  5. POST to the Salesforce token endpoint with grant_type=urn:ietf:params:oauth:grant-type:jwt-bearer and the signed JWT
  6. Salesforce validates the signature against the stored public key and returns an access token
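When Salesforce itself is the calling server (org-to-org, or any JWT-aware endpoint), the Auth namespace implements the whole exchange. A minimal sketch — the consumer key, username, and certificate name are placeholders:
// Sign with a certificate stored in Setup → Certificate and Key Management
Auth.JWT jwt = new Auth.JWT();
jwt.setIss('3MVG9...consumerKey');            // Connected App consumer key
jwt.setSub('integration.user@example.com');   // pre-authorised Salesforce user
jwt.setAud('https://login.salesforce.com');   // token audience

// RS256 signature using the certificate's private key
Auth.JWS jws = new Auth.JWS(jwt, 'JWT_Signing_Cert');

// Exchange the signed assertion for an access token
Auth.JWTBearerTokenExchange bearer = new Auth.JWTBearerTokenExchange(
    'https://login.salesforce.com/services/oauth2/token', jws
);
String accessToken = bearer.getAccessToken();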
Say This in Interview
"JWT Bearer Flow is for headless server-to-server auth — the external system signs a JWT with a private key, exchanges it for a Salesforce access token without any user interaction. I use it for scheduled integrations and background services where there's no user to click an OAuth consent screen."
Q49What is Mutual TLS (mTLS) and how does Salesforce support it? Advanced
🔐Mutual TLS requires BOTH the client and server to present certificates for authentication — unlike standard TLS where only the server presents a certificate. In Salesforce, you configure a Certificate and Key Pair in Setup, reference it in the Named Credential, and Salesforce automatically presents the client certificate in every callout.
📋 mTLS Setup in Salesforce
  1. Setup → Certificate and Key Management → Create Self-Signed Certificate (or import a CA-signed one)
  2. Export the certificate's public key and share it with the external system so it can trust it
  3. Configure the Named Credential → Certificate section → select the certificate
  4. The external system validates Salesforce's client certificate on every callout it receives from Salesforce
Standard TLS | Mutual TLS (mTLS)
Server presents a certificate | Both client AND server present certificates
Client verifies server identity | Both sides verify each other's identity
Used for: HTTPS websites | Used for: High-security API integrations
No client cert needed | Client cert configured in Named Credential
Say This in Interview
"mTLS requires the calling system to present a client certificate that the server validates — not just the other way around. In Salesforce I configure a Certificate and Key Pair and reference it in the Named Credential — Salesforce automatically presents the certificate in every callout to the external system that requires it."
Q50What is the Correlation ID pattern in integrations and why is it important? Advanced
🔗A Correlation ID is a unique identifier generated at the start of a transaction and passed through every system involved in that transaction. It allows you to trace a single business transaction across multiple systems, log files, and API calls — essential for debugging failures in distributed integrations.
// Generate a Correlation ID at integration start
// UUID.randomUUID() is built in on recent API versions
String corrId = UUID.randomUUID().toString();
// Fallback for older API versions:
// String corrId = EncodingUtil.convertToHex(Crypto.generateAESKey(128));

// Pass it in every request header
HttpRequest req = new HttpRequest();
req.setEndpoint('callout:BC_Integration/orders');
req.setHeader('X-Correlation-ID', corrId);
req.setHeader('X-Source-System', 'Salesforce');

// Log with the Correlation ID
insert new Integration_Log__c(
    Correlation_Id__c = corrId,
    Request_Payload__c = req.getBody(),
    Status__c = 'Sent'
);
🏭 Real World

Every BC sync call from our org includes X-Correlation-ID header with the Salesforce Order ID + timestamp. When BC raises a support issue, they share the Correlation ID — we instantly find the matching log in Salesforce and in BC's own logs to pinpoint exactly where in the chain the failure occurred.

Say This in Interview
"Correlation ID is a unique identifier passed as a header in every API call across all systems in a transaction — it enables end-to-end tracing of a single business event through Salesforce, middleware, and target systems. Without it, debugging distributed integration failures becomes guesswork."
Q51What are External Services in Salesforce and what are their limitations? Medium
🔌External Services let admins connect to external REST APIs declaratively — without Apex code — by importing an OpenAPI (Swagger) spec. The API operations become available as actions in Flow Builder. Use for simple, well-documented external APIs accessible to admins. Not suitable for complex authentication, binary data, or non-REST protocols.
Factor | External Services | Custom Apex Callout
Code required | ❌ None (declarative) | ✅ Apex required
Available in Flow | ✅ As Flow action | ✅ Via @InvocableMethod
Complex auth handling | ❌ Limited | ✅ Full control
Error handling | ❌ Basic | ✅ Granular
Binary/file payloads | ❌ Not supported | ✅ Supported
Non-REST APIs | ❌ REST only | ✅ Any protocol
Say This in Interview
"External Services let admins import an OpenAPI spec and use the API as a Flow action without any code — great for simple REST APIs with straightforward auth. I recommend them for admin-managed integrations but use Apex callouts when I need complex error handling, retry logic, binary payloads, or non-standard authentication."
Q52How does the Salesforce Bulk API 2.0 differ from Bulk API 1.0? Medium
📦Bulk API 2.0 simplifies large-scale data operations with a cleaner REST interface, automatic chunking, CSV-only format, and no need to manage batches manually. API 1.0 requires manual batch creation and supports both XML and CSV. Use 2.0 for new integrations — it's simpler and processes data faster.
Feature | Bulk API 1.0 | Bulk API 2.0
Batch management | Manual — create batches explicitly | ✅ Automatic — upload data, done
Data format | XML and CSV | CSV only
Result access | Poll per batch | ✅ Single results download
Max records per job | Configurable batches | Unlimited (auto-chunked)
PK Chunking | ✅ Supported | ❌ Not needed (auto)
Best for | Legacy integrations | ✅ New integrations 2026
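In practice the 2.0 job lifecycle is three REST calls against the /jobs/ingest resource: create the job, upload the CSV, close the job. A minimal sketch, written as Apex callouts for consistency — 'My_Org' is a placeholder Named Credential and the CSV rows are illustrative:
// 1. Create the ingest job
HttpRequest createJob = new HttpRequest();
createJob.setEndpoint('callout:My_Org/services/data/v59.0/jobs/ingest');
createJob.setMethod('POST');
createJob.setHeader('Content-Type', 'application/json');
createJob.setBody('{"object":"Account","operation":"insert"}');
HttpResponse jobRes = new Http().send(createJob);
String jobId = (String) ((Map<String, Object>)
    JSON.deserializeUntyped(jobRes.getBody())).get('id');

// 2. Upload the CSV — Salesforce chunks it automatically
HttpRequest upload = new HttpRequest();
upload.setEndpoint('callout:My_Org/services/data/v59.0/jobs/ingest/'
    + jobId + '/batches');
upload.setMethod('PUT');
upload.setHeader('Content-Type', 'text/csv');
upload.setBody('Name\nISPL Pharma\nAcme Corp');
new Http().send(upload);

// 3. Mark the upload complete — processing starts
HttpRequest closeJob = new HttpRequest();
closeJob.setEndpoint('callout:My_Org/services/data/v59.0/jobs/ingest/' + jobId);
closeJob.setMethod('PATCH');
closeJob.setHeader('Content-Type', 'application/json');
closeJob.setBody('{"state":"UploadComplete"}');
new Http().send(closeJob);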
Say This in Interview
"Bulk API 2.0 eliminates manual batch management — you upload a CSV, Salesforce automatically chunks it and processes it. For new integrations I always use 2.0 unless I have a specific reason for 1.0 like PK chunking requirements. The simplified REST interface also makes debugging significantly easier."
Q53What is the Remote Process Invocation pattern in Salesforce integration? Advanced
🔄Remote Process Invocation is an integration pattern where Salesforce initiates a process in an external system and either waits for the result (synchronous) or receives it later via callback (asynchronous). It's the foundation pattern for most enterprise Salesforce-to-ERP integrations.
Pattern Variant | Flow | Use When
Sync Fire-and-Wait | SF → External → Response → SF | Real-time validation, short operations
Async Fire-and-Forget | SF → External (no wait) | Long-running processes, notifications
Async with Callback | SF → External → (later) External → SF | Long-running with result needed
🏭 Real World

Our BC order creation uses Async with Callback — when an Order is Approved in Salesforce, a Queueable sends it to BC without waiting for the result. BC processes it and calls back our webhook with the BC Order Number, and the webhook receiver updates the Salesforce Order with the BC reference. The whole round trip takes 2-5 minutes, but the user never waits.

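A minimal sketch of the outbound leg — a Queueable with Database.AllowsCallouts, following the callout:BC_Integration convention used elsewhere on this page; the Amount__c field and payload shape are illustrative:
public class BCOrderSyncQueueable implements Queueable, Database.AllowsCallouts {
    private Id orderId;

    public BCOrderSyncQueueable(Id orderId) {
        this.orderId = orderId;
    }

    public void execute(QueueableContext ctx) {
        Order__c ord = [SELECT Id, Amount__c FROM Order__c WHERE Id = :orderId];

        HttpRequest req = new HttpRequest();
        req.setEndpoint('callout:BC_Integration/orders'); // Named Credential
        req.setMethod('POST');
        req.setHeader('Content-Type', 'application/json');
        req.setBody(JSON.serialize(new Map<String, Object>{
            'salesforceOrderId' => ord.Id,
            'amount' => ord.Amount__c
        }));
        new Http().send(req);
        // No waiting on a result here — BC calls our webhook back later
        // with its order number
    }
}
// Enqueued when the Order is approved:
// System.enqueueJob(new BCOrderSyncQueueable(orderId));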
Say This in Interview
"Remote Process Invocation is the core pattern for Salesforce-to-external-system processing. I choose Sync Fire-and-Wait for real-time validation needs, and Async with Callback for longer operations — Salesforce fires the request, the external system processes it and calls back a webhook with the result."
Q54How do you expose an Apex class as a REST web service? What are the security considerations? Medium
🌐Use @RestResource(urlMapping='/endpoint/*') on a class and @HttpGet, @HttpPost etc. on static methods. The service is accessible at /services/apexrest/endpoint/. Security requires Connected App OAuth or Session ID authentication — the class runs in the context of the authenticated user.
@RestResource(urlMapping='/orders/*')
global with sharing class OrderRestService {

    @HttpGet
    global static Order__c getOrder() {
        RestRequest req = RestContext.request;
        String orderId = req.requestURI.substring(
            req.requestURI.lastIndexOf('/') + 1
        );
        return [SELECT Id, Status__c, Amount__c FROM Order__c
                WHERE Id = :orderId WITH USER_MODE];
    }

    @HttpPost
    global static String createOrder() {
        String body = RestContext.request.requestBody.toString();
        Map<String, Object> payload =
            (Map<String, Object>) JSON.deserializeUntyped(body);
        // Process and create the order...
        return JSON.serialize(new Map<String, String>{'status' => 'created'});
    }
}
🔐 Security Checklist
  • Always use with sharing — respects the caller's record visibility
  • Use WITH USER_MODE or Security.stripInaccessible() for FLS enforcement
  • Validate all input from requestBody — never trust external input
  • Restrict Connected App to specific IP ranges and OAuth scopes
  • Never expose via Salesforce Sites with Guest User unless explicitly intended for public access
Say This in Interview
"@RestResource exposes an Apex class as a REST endpoint under /services/apexrest/ — I always use 'with sharing' so the caller's record visibility is respected, WITH USER_MODE for FLS, validate all input from requestBody, and restrict the Connected App's OAuth scope to minimum necessary permissions."
Q55What is Event-Driven Architecture (EDA) and how does Salesforce implement it? Advanced
📡Event-Driven Architecture decouples systems so they communicate through events rather than direct calls. Salesforce implements EDA through Platform Events (custom), Change Data Capture (automatic record changes), and Streaming API — allowing real-time pub-sub communication between Salesforce, external systems, and LWC components.
Salesforce EDA Tool | What It Publishes | Subscribers
Platform Events | Custom business events | Apex triggers, Flow, external CometD
Change Data Capture | Record create/update/delete/undelete | Apex triggers, Flow, external CometD
Generic Streaming (PushTopic) | Custom SOQL result changes | External CometD, LWC EmpApi
🏭 Real World

Our architecture uses Platform Events as the integration backbone — when an Order status changes in Salesforce, we publish Order_Status_Changed__e. Multiple subscribers react independently: our warehouse system updates picking status, our customer portal updates the order tracker, and an LWC on the Salesforce Order record updates in real time via EmpApi. All decoupled — changing one subscriber never affects the others.

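A minimal publish sketch for the Order_Status_Changed__e event described above — the event's custom fields are illustrative:
// Publish the event — each subscriber reacts independently
Order_Status_Changed__e evt = new Order_Status_Changed__e(
    Order_Id__c = order.Id,
    New_Status__c = order.Status__c
);
Database.SaveResult sr = EventBus.publish(evt);
if(!sr.isSuccess()) {
    for(Database.Error err : sr.getErrors()) {
        System.debug('Publish failed: ' + err.getMessage());
    }
}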
Say This in Interview
"Salesforce EDA uses Platform Events as the event bus — publishers don't know or care who subscribes. I use it to decouple Salesforce from downstream systems: one event publish triggers independent processing in warehouse systems, customer portals, and LWC components simultaneously without any direct coupling."
Q56How do you handle sensitive data like PII when designing Salesforce integrations? Advanced
🔐PII in integrations requires a multi-layer approach — data masking before logging, encryption in transit (TLS) and at rest (Shield), tokenisation for fields that don't need the raw value, and strict access controls on integration logs that capture request/response payloads.
✅ PII Protection Layers
  • 1️⃣Encryption at rest: Salesforce Shield Platform Encryption for sensitive fields — SSN, payment info, health data
  • 2️⃣TLS in transit: All callouts must use HTTPS — enforced by Named Credentials
  • 3️⃣Data masking in logs: Strip or mask PII from integration log payloads before inserting to Integration_Log__c (a masking sketch follows this list)
  • 4️⃣Tokenisation: Store a token reference, not the actual PII — external vault holds the real value
  • 5️⃣Minimal data principle: Only send fields the external system actually needs — never send full customer record
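A minimal masking sketch for layer 3. The key list and mask token are assumptions, and this version only walks top-level JSON keys — nested payloads would need recursion:
// Mask PII values before writing a payload to Integration_Log__c
private static final Set<String> PII_KEYS =
    new Set<String>{'ssn', 'dateofbirth', 'cardnumber'};

public static String maskPayload(String jsonPayload) {
    Map<String, Object> data =
        (Map<String, Object>) JSON.deserializeUntyped(jsonPayload);
    for(String key : data.keySet()) {
        if(PII_KEYS.contains(key.toLowerCase())) {
            data.put(key, '***MASKED***'); // replace the raw value
        }
    }
    return JSON.serialize(data);
}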
Say This in Interview
"PII in integrations needs layered protection — Shield encryption at rest, TLS in transit, masked payloads in integration logs, and tokenisation for fields the external system only needs to reference not read. I also apply minimum data principle — only send what the external system genuinely needs."
Q57What is Change Data Capture (CDC) and how does it differ from Platform Events? Medium
🔄Change Data Capture automatically generates change events for record creates, updates, deletes, and undeletes on standard and custom objects — no code needed to publish. Platform Events are manually published by Apex code or Flow. CDC is for "what changed in Salesforce data," Platform Events are for "what happened in my business process."
Factor | Change Data Capture | Platform Events
Publishing | ✅ Automatic on record DML | Manual — Apex or Flow
What it captures | Field changes with old/new values | Any payload you define
Retention | 72 hours for replay | 72 hours for replay
Objects supported | Standard + custom objects | Custom event definitions
Setup | Enable in Setup → Change Data Capture | Define a custom event object
Best for | Sync Salesforce data to external systems | Business process events
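Once CDC is enabled for an object, subscribing in Apex is just a trigger on its change event. A minimal sketch for Account:
trigger AccountChangeTrigger on AccountChangeEvent (after insert) {
    for(AccountChangeEvent evt : Trigger.new) {
        EventBus.ChangeEventHeader header = evt.ChangeEventHeader;
        // changeType is CREATE, UPDATE, DELETE, or UNDELETE
        System.debug(header.changeType + ' on '
            + String.join(header.recordIds, ', '));
        // Only changed fields are populated on the event payload;
        // forward them to the external sync process here
    }
}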
Say This in Interview
"CDC automatically publishes change events on every record DML with the changed field values — perfect for keeping external data warehouses in sync with Salesforce. Platform Events are for custom business events I explicitly publish. CDC is for 'data changed,' Platform Events are for 'something happened.'"
Q58How does Salesforce Connect and External Objects work? Advanced
🔗Salesforce Connect creates External Objects that look like Salesforce objects but retrieve data in real time from external systems via OData protocol or custom Apex adapters — without copying data into Salesforce. Data stays in the source system; Salesforce queries it on demand.
Approach | Data Location | Performance
Standard custom objects | In Salesforce database | Fast — local query
External Objects (Connect) | In external system | Depends on external API speed
✅ When to Use External Objects
  • Data too large to copy into Salesforce (millions of ERP records)
  • Data must stay in external system for compliance or governance reasons
  • Real-time accuracy required — cached data unacceptable
  • ⚠️Not suitable for high-frequency queries — each access hits the external API
  • ⚠️SOQL features limited — no aggregate queries, no SOSL, limited relationships
Say This in Interview
"Salesforce Connect creates External Objects that proxy real-time queries to external systems via OData — data never copies into Salesforce. I use it when data is too large for Salesforce storage or must legally stay in the source system, accepting the tradeoff that every query hits the external API."
Q59How do you implement robust error handling for a complex multi-step integration? Advanced
🛡️Robust multi-step integration error handling requires: categorising error types (transient vs permanent), appropriate retry strategy per type, a persistent error log with full context, alerting for manual review, and idempotent operations so retries don't cause data duplication.
🔍 Error Classification
  • 🔄Transient (retry automatically): HTTP 429 (rate limited), 503 (service unavailable), timeout — retry with exponential backoff
  • Permanent (don't retry): HTTP 400 (bad request), 404 (not found), validation errors — log and alert for manual fix
  • ⚠️Ambiguous (retry with caution): HTTP 500 (server error) — retry limited times then escalate
🏭 Real World

Our BC integration handler categorises every HTTP response — 4xx errors go to "Permanent Failure" queue for manual review, 429 and 503 trigger automatic retry via Queueable chaining with exponential delay (1 min, 4 min, 16 min), and 5xx errors retry 3 times then escalate. Every outcome logs to BC_Integration_Log__c with full request, response, correlation ID, and retry count.

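A minimal classification sketch mirroring those buckets — intended for non-2xx responses only; note that timeouts surface as a CalloutException rather than a status code, so they are caught and treated as transient separately:
public enum ErrorClass { TRANSIENT, PERMANENT, AMBIGUOUS }

// Call for non-2xx responses only
public static ErrorClass classify(HttpResponse res) {
    Integer code = res.getStatusCode();
    if(code == 429 || code == 503) {
        return ErrorClass.TRANSIENT;  // retry with exponential backoff
    }
    if(code >= 400 && code < 500) {
        return ErrorClass.PERMANENT;  // log and alert for manual review
    }
    return ErrorClass.AMBIGUOUS;      // 5xx — limited retries, then escalate
}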
Say This in Interview
"I categorise integration errors into transient (auto-retry with backoff), permanent (alert for manual fix), and ambiguous (limited retry then escalate). Every outcome — success or failure — logs to a custom object with correlation ID so we have complete end-to-end visibility."
Q60What is the purpose of the System.JSONGenerator class and when would you use it over JSON.serialize()? Advanced
📄JSONGenerator provides streaming, fine-grained control over JSON construction — building the JSON string token by token. Use it when JSON.serialize() produces incorrect output (wrong field names, extra fields, null handling issues) or when you need to build complex nested JSON structures not directly mappable to Apex classes.
// JSON.serialize() — simple, automatic from an Apex object
Map<String, Object> simpleMap = new Map<String, Object>{
    'orderId' => 'ORD-001',
    'amount' => 5000
};
String json1 = JSON.serialize(simpleMap);

// JSONGenerator — fine control, complex nesting
JSONGenerator gen = JSON.createGenerator(true); // true = pretty print
gen.writeStartObject();
gen.writeStringField('orderId', 'ORD-001');
gen.writeNumberField('amount', 5000);
gen.writeFieldName('lineItems');
gen.writeStartArray();
gen.writeStartObject();
gen.writeStringField('product', 'Bioreactor');
gen.writeNumberField('qty', 2);
gen.writeEndObject();
gen.writeEndArray();
gen.writeEndObject();
String json2 = gen.getAsString();
Say This in Interview
"I use JSON.serialize() for standard Apex object-to-JSON conversion. I switch to JSONGenerator when I need precise control over field names (API may require camelCase), null field exclusion, complex nested array structures, or when the external API's required JSON format doesn't map cleanly to any Apex class structure."
💻
Advanced LWC & Performance
LWC internals, performance optimisation, and advanced component patterns
Q61–Q75
Q61What is the difference between @wire, @api, and @track decorators in LWC? Easy
🎯@api marks a property as public — parent components can pass data in and child can expose methods. @wire connects a property to an Apex method or wire adapter for reactive data fetching. @track (largely deprecated in modern LWC) made nested object changes reactive — all properties are now reactive by default.
Decorator | Purpose | Direction
@api | Public property — parent can pass data | Parent → Child
@wire | Reactive data from Apex or a wire adapter | Salesforce → Component
@track (legacy) | Deep reactivity for objects/arrays | Internal only
No decorator | Private reactive property (modern default) | Internal only
import { LightningElement, api, wire } from 'lwc';
import getOrders from '@salesforce/apex/OrderController.getOrders';

export default class OrderList extends LightningElement {
    @api recordId; // parent passes the Account ID

    @wire(getOrders, { accountId: '$recordId' })
    orders; // reactive wire — re-fetches when recordId changes

    isLoading = false;  // private reactive (no decorator needed in modern LWC)
    selectedOrder = {}; // objects are also reactive without @track
}
Say This in Interview
"@api makes a property externally settable by parent components; @wire connects to Apex or a wire adapter for reactive data that updates when parameters change; @track is largely unnecessary in modern LWC since all properties are reactive by default — I only use it for legacy code compatibility."
Q62What is the difference between connectedCallback and renderedCallback in LWC? Medium
🔄connectedCallback fires once when the component is inserted into the DOM — use for one-time initialisation like event subscriptions and imperative Apex calls. renderedCallback fires after every render/re-render — use cautiously and always guard with a flag to avoid infinite loops.
export default class OrderComponent extends LightningElement {
    isInitialised = false;

    connectedCallback() {
        // Fires ONCE when the component enters the DOM
        this.subscribeToEvents(); // ✅ Subscribe once
        this.loadOrderData();     // ✅ One-time data load
    }

    renderedCallback() {
        // Fires after EVERY render — guard with a flag!
        if(this.isInitialised) return; // ✅ Prevent repeat execution
        this.isInitialised = true;
        // Access rendered DOM elements
        const el = this.template.querySelector('.my-chart');
        this.initChart(el); // ✅ DOM must be rendered first
    }

    disconnectedCallback() {
        // Fires when the component is removed from the DOM
        this.unsubscribeFromEvents(); // ✅ Cleanup
    }
}
Say This in Interview
"connectedCallback fires once on DOM insertion — perfect for subscriptions and one-time data loading. renderedCallback fires after every render so I always guard it with a boolean flag to prevent re-execution. disconnectedCallback is where I unsubscribe and clean up resources."
Q63How do you optimise an LWC that renders a large list of records? Advanced
Never render all records at once. Use pagination or virtual scrolling, lazy-load data as the user scrolls, debounce search and filter inputs, use cacheable Apex for data that changes infrequently, and avoid for:each loops whose item templates trigger heavy reactive computations on every render.
✅ Performance Optimisation Techniques
  • 1️⃣Pagination: Load 20-50 records at a time — lightning-datatable enable-infinite-loading with onloadmore handler
  • 2️⃣cacheable=true: Mark Apex methods cacheable to leverage client-side cache — reduces repeat server calls
  • 3️⃣Debounce: 300ms delay on search inputs — prevents Apex call on every keystroke
  • 4️⃣Avoid nested reactive: Don't put heavy computations in getters that fire on every render
  • 5️⃣Lazy load child components: Use lwc:if to render child components only when needed
// Debounce the search input
handleSearchChange(event) {
    clearTimeout(this.searchTimer);
    const value = event.target.value; // capture before the async callback
    this.searchTimer = setTimeout(() => {
        this.searchTerm = value;
        this.loadRecords(); // fires 300ms after the user stops typing
    }, 300);
}
Say This in Interview
"For large record lists I implement pagination or infinite scroll to limit DOM nodes, debounce search inputs to prevent call-per-keystroke, use cacheable Apex to reduce server round trips, and lazy-load child components only when needed — the goal is keeping the DOM lean and Apex calls minimal."
Q64How do you call an Apex method imperatively vs using @wire — when would you choose each? Medium
@wire is reactive — automatically re-calls when parameters change, ideal for data that should refresh when reactive properties change. Imperative calls use .then()/.catch() or async/await — use them for user-triggered actions, mutations (DML), or when you need precise control over when the call fires.
// @wire — reactive, automatic on param change
@wire(getOrder, { orderId: '$recordId' })
orderData; // re-calls automatically when recordId changes

// Imperative — user-triggered, for mutations
async handleSave() {
    try {
        this.isSaving = true;
        const result = await saveOrder({ orderData: this.orderData });
        this.showToast('Success', 'Saved', 'success');
        // Refresh the wire cache after the mutation
        await refreshApex(this.orderData);
    } catch(error) {
        this.error = error.body?.message;
    } finally {
        this.isSaving = false;
    }
}
Use @wire when | Use Imperative when
Data loads on component init | User clicks a button
Should refresh when a param changes | Performing DML (insert/update)
Read-only Apex (cacheable=true) | Need loading state control
Say This in Interview
"@wire for read-only reactive data that should auto-refresh when component properties change; imperative for user-triggered actions and DML where I need precise control over the loading state. After a DML imperative call I use refreshApex() to invalidate the @wire cache so the component shows fresh data."
Q65How does the LWC shadow DOM affect CSS styling and how do you style child components? Advanced
🎨LWC uses Shadow DOM to encapsulate component CSS — styles defined in a component's CSS file only apply to elements within that component, not its children. To style child component internals you must use CSS Custom Properties (variables), the :host selector, or SLDS utility classes.
/* Parent component CSS — cannot pierce the child's shadow */
.my-button { color: red; } /* Only affects THIS component's template */

/* ✅ CSS Custom Properties — cross the shadow boundary */
/* Parent defines the variable */
:host { --primary-color: #6C63FF; }

/* Child component reads the variable */
.child-button { color: var(--primary-color); }

/* ✅ :host selector — style the component's root element */
:host { display: block; margin: 10px; }
:host(.special) { border: 2px solid red; } /* when the parent adds the class */
✅ Styling Approaches
  • CSS Custom Properties: Define variable in parent, child reads with var() — crosses shadow boundary
  • SLDS utility classes: Shared Salesforce styles available everywhere
  • @api styleClass: Expose a public property for parent to pass CSS class names
  • CSS selectors cannot pierce Shadow DOM — .parentClass .childElement won't work across components
Say This in Interview
"LWC Shadow DOM encapsulates CSS — parent styles can't reach inside child component DOM. I use CSS Custom Properties to pass styling across the boundary: parent defines the variable, child reads it with var(). For the component's own root element I use :host selector, and SLDS utilities work everywhere."
Q66What is the LWC wire adapter pattern and how would you create a custom wire adapter? Advanced
🔌A custom wire adapter is a JavaScript class implementing the wire adapter protocol — it receives a wire context and can push data reactively to the @wire property. Used for creating reusable reactive data sources like custom streaming connections, localStorage, or complex computed data.
// customAdapter.js — wire adapter implementation (wire-service protocol)
import { register, ValueChangedEvent } from '@lwc/wire-service';

const adapterId = {}; // unique identifier

function CustomAdapter(dataCallback) {
    this.dataCallback = dataCallback;
}

CustomAdapter.prototype.update = function(config) {
    // Called when @wire params change
    // fetchData(...) is a placeholder for whatever data source you wrap
    this.dataCallback({ data: fetchData(config.param), error: undefined });
};

CustomAdapter.prototype.connect = function() {
    // Called when the component is inserted into the DOM
};

CustomAdapter.prototype.disconnect = function() {
    // Cleanup when the component is removed from the DOM
};

register(adapterId, CustomAdapter);
export { adapterId as customAdapter };
Say This in Interview
"A custom wire adapter implements connect, disconnect, and update methods — update fires when @wire parameters change and pushes new data via the callback. I'd build one for streaming real-time data from a custom source or wrapping a complex computation into a reusable reactive pattern."
Q67How do you access and interact with standard Salesforce navigation and record utilities in LWC? Medium
🧭LWC provides NavigationMixin for navigation between pages and records, and the lightning/uiRecordApi wire adapters for reading and writing records without custom Apex. Together they handle most standard record operations declaratively.
import { LightningElement, api, wire } from 'lwc';
import { NavigationMixin } from 'lightning/navigation';
import { getRecord, updateRecord, createRecord } from 'lightning/uiRecordApi';
import STATUS_FIELD from '@salesforce/schema/Order__c.Status__c';

export default class OrderActions extends NavigationMixin(LightningElement) {
    @api recordId;

    // Read the record via wire
    @wire(getRecord, { recordId: '$recordId', fields: [STATUS_FIELD] })
    order;

    navigateToOrder() {
        this[NavigationMixin.Navigate]({
            type: 'standard__recordPage',
            attributes: { recordId: this.recordId, actionName: 'view' }
        });
    }

    async approveOrder() {
        const fields = { Id: this.recordId, [STATUS_FIELD.fieldApiName]: 'Approved' };
        await updateRecord({ fields });
    }
}
Say This in Interview
"I use NavigationMixin for programmatic page navigation and lightning/uiRecordApi for standard record CRUD without custom Apex — getRecord for reading, updateRecord for field updates, createRecord for new records. These leverage Salesforce's built-in cache and respect sharing rules automatically."
Q68What are Slots in LWC and how do they enable component composition? Advanced
🧩Slots are placeholders in a component's template that parents can fill with their own markup. They enable component composition — a base layout component can define slots for header, body, and footer that parent components fill with different content, making the base component reusable across different contexts.
<!-- base-card.html — defines slots -->
<template>
    <div class="card">
        <div class="header">
            <slot name="header">Default Header</slot> <!-- named slot -->
        </div>
        <div class="body">
            <slot></slot> <!-- default slot -->
        </div>
        <div class="footer">
            <slot name="footer"></slot>
        </div>
    </div>
</template>

<!-- parent-page.html — fills the slots -->
<template>
    <c-base-card>
        <span slot="header">Order Summary</span>      <!-- fills the named slot -->
        <p>Order content goes here</p>                <!-- fills the default slot -->
        <lightning-button slot="footer" label="Save"></lightning-button>
    </c-base-card>
</template>
Say This in Interview
"LWC Slots allow parents to inject content into specific positions of a child component's template — enabling true composition where a base layout component is reused with different content in each context. Named slots target specific positions, the default slot catches any non-named content."
Q69How does data binding work in LWC and what is the difference between one-way and two-way binding? Easy
🔄LWC uses one-way data binding by default — data flows from component to template but not back. Two-way binding (like Aura's) doesn't exist natively in LWC. Instead, LWC uses events to communicate changes from template to component, maintaining a unidirectional data flow.
<!-- One-way binding — the template shows the JS property's value -->
<lightning-input label="Name" value={customerName}></lightning-input>
<!-- Changes typed into the input don't update customerName automatically -->

<!-- Capture changes via an event handler -->
<lightning-input
    label="Name"
    value={customerName}
    onchange={handleNameChange}>
</lightning-input>

// JS — handle the event and update the property
handleNameChange(event) {
    this.customerName = event.target.value; // manual "two-way"
}
Say This in Interview
"LWC has one-way data binding — JavaScript properties flow into the template but changes in the template don't automatically update the property. I achieve two-way binding manually by handling change events and updating the property in JavaScript — this explicit pattern makes data flow more predictable and debuggable."
Q70How do you implement keyboard accessibility in a custom LWC component? Advanced
Keyboard accessibility in LWC requires proper ARIA attributes, tabindex management, keyboard event handlers for Enter/Space/Arrow keys on custom interactive elements, and focus management — especially important for modal dialogs, custom dropdowns, and data tables that deviate from native HTML elements.
<!-- Custom dropdown with keyboard support -->
<div role="combobox"
     aria-expanded={isOpen}
     aria-haspopup="listbox"
     tabindex="0"
     onkeydown={handleKeyDown}>
    {selectedLabel}
</div>

// JS — handle keyboard interactions
handleKeyDown(event) {
    switch(event.key) {
        case 'Enter':
        case ' ': // Space
            this.toggleDropdown();
            event.preventDefault();
            break;
        case 'ArrowDown':
            this.focusNextOption();
            event.preventDefault();
            break;
        case 'Escape':
            this.closeDropdown();
            break;
    }
}
Say This in Interview
"Keyboard accessibility in custom LWC requires ARIA role and state attributes, tabindex on focusable elements, and onkeydown handlers for Enter, Space, Arrow keys, and Escape. I follow WCAG patterns — custom dropdowns need full keyboard navigation, modals need focus trapping, and interactive elements need visible focus indicators."
Q71How do you implement a Toast notification in LWC? Easy
🔔Use the ShowToastEvent from lightning/platformShowToastEvent — dispatch it with title, message, variant (success/error/warning/info), and optional mode (dismissable/sticky/pester). Works inside Lightning Experience record pages and App Builder pages — not in standalone pages or Communities by default.
import { ShowToastEvent } from 'lightning/platformShowToastEvent';

// Success toast
showSuccessToast(message) {
    this.dispatchEvent(new ShowToastEvent({
        title: 'Success',
        message: message,
        variant: 'success',
        mode: 'dismissable'
    }));
}

// Error toast with details
showErrorToast(error) {
    this.dispatchEvent(new ShowToastEvent({
        title: 'Error Saving Record',
        message: error?.body?.message || 'Unknown error',
        variant: 'error',
        mode: 'sticky' // stays until dismissed
    }));
}
Say This in Interview
"ShowToastEvent dispatched from the component bubbles up to the Lightning page framework which renders the toast notification. I use 'success' variant for confirmations, 'error' with sticky mode for failures that need user acknowledgment, and always include the actual error message from error.body.message for actionable feedback."
Q72What is the difference between custom events and standard DOM events in LWC? Medium
📡Standard DOM events (click, change, focus) are native browser events. Custom events in LWC are created with new CustomEvent('name', {detail: data}) — they bubble up the component tree. Key difference: custom events don't cross shadow DOM boundaries by default unless bubbles and composed are both set to true.
// Dispatch a custom event from the child
handleSelect(event) {
    this.dispatchEvent(new CustomEvent('orderselected', {
        detail: { orderId: this.orderId, orderName: this.orderName },
        bubbles: true,   // bubbles up the DOM tree
        composed: false  // stays within the shadow boundary (default)
    }));
}

<!-- Parent listens with on + event name -->
<c-child-component onorderselected={handleOrderSelected}></c-child-component>

// Parent handler
handleOrderSelected(event) {
    const { orderId, orderName } = event.detail;
}
Say This in Interview
"Custom events carry business data in event.detail and bubble up the component tree. I set bubbles:true when a grandparent needs to listen without the parent explicitly forwarding. composed:true lets events cross shadow DOM boundaries — I use this sparingly as it exposes internal component events to the outside."
Q73How do you subscribe to and publish Lightning Message Service (LMS) channels in LWC? Medium
📻Lightning Message Service enables pub-sub communication between unrelated LWC, Aura, and Visualforce components on the same Lightning page. Create a Message Channel metadata file, import publish/subscribe functions, and use a shared messageContext wire adapter for the communication context.
// Publisher component
import { publish, MessageContext } from 'lightning/messageService';
import ORDER_SELECTED from '@salesforce/messageChannel/OrderSelected__c';

@wire(MessageContext) messageContext;

notifyOrderSelected(orderId) {
    publish(this.messageContext, ORDER_SELECTED, { orderId });
}

// Subscriber component
import { subscribe, unsubscribe, MessageContext } from 'lightning/messageService';

@wire(MessageContext) messageContext;
subscription = null;

connectedCallback() {
    this.subscription = subscribe(
        this.messageContext,
        ORDER_SELECTED,
        (msg) => { this.handleOrderSelected(msg.orderId); }
    );
}

disconnectedCallback() {
    unsubscribe(this.subscription); // always unsubscribe!
}
Say This in Interview
"LMS requires a Message Channel metadata file, the MessageContext wire adapter for scope, and publish/subscribe functions. I always unsubscribe in disconnectedCallback — failing to do so causes memory leaks as the subscription persists after the component is removed from the DOM."
Q74How do you implement a loading spinner pattern in LWC while Apex data loads? Easy
Use a boolean isLoading property toggled around the Apex call. For @wire, check if the wire result's data and error are both undefined to detect loading state. Display lightning-spinner conditionally using lwc:if — always handle both success and error states.
<!-- template -->
<template>
    <template lwc:if={isLoading}>
        <lightning-spinner alternative-text="Loading..." size="medium"></lightning-spinner>
    </template>
    <template lwc:if={hasData}>
        <!-- render records -->
    </template>
    <template lwc:if={error}>
        <p class="error">{errorMessage}</p>
    </template>
</template>

// JS — imperative call pattern (hasData would be a getter over this.records)
async loadData() {
    this.isLoading = true;
    this.error = null;
    try {
        this.records = await getOrderData({ accountId: this.recordId });
    } catch(err) {
        this.error = err.body?.message;
    } finally {
        this.isLoading = false; // always hide the spinner
    }
}
Say This in Interview
"I toggle an isLoading boolean in a try-finally block — set true before the Apex call, false in finally so the spinner always hides even on error. The finally block is critical — without it an error leaves the spinner running forever and the component unusable."
Q75What is the Content Security Policy (CSP) in LWC and how does it affect loading external libraries? Advanced
🔒Salesforce enforces strict Content Security Policy that blocks loading scripts from external domains by default. To use third-party JavaScript libraries in LWC, you must upload them as Static Resources — you cannot use CDN URLs directly in LWC scripts or templates.
// ❌ CANNOT load from a CDN in LWC
// <script src="https://cdn.example.com/lib.js"> — BLOCKED by CSP

// ✅ Upload to Static Resources first
// Setup → Static Resources → upload the library
import { LightningElement } from 'lwc';
import { loadScript, loadStyle } from 'lightning/platformResourceLoader';
import CHART_JS from '@salesforce/resourceUrl/ChartJS';

renderedCallback() {
    if(this.chartLoaded) return; // guard — fires on every render
    this.chartLoaded = true;
    loadScript(this, CHART_JS)
        .then(() => {
            this.initChart(); // Chart.js is available globally now
        })
        .catch(error => {
            console.error('Failed to load library', error);
        });
}
Say This in Interview
"Salesforce CSP blocks external script loading in LWC — I upload third-party libraries as Static Resources and use loadScript() from lightning/platformResourceLoader in renderedCallback with a loaded guard flag. This makes the library available globally after loading, and I can then initialise charts or other library-dependent code."
🔐
Advanced Security, Sharing & Permissions
Deep-dive security architecture, Shield, permission design, and audit
Q76–Q88
Q76What is the Security.stripInaccessible() method and how does it differ from WITH SECURITY_ENFORCED? Advanced
🔐Security.stripInaccessible() silently removes fields a user cannot access from query results or DML inputs — returning only accessible data. WITH SECURITY_ENFORCED throws a QueryException if the user lacks access to any queried field. Strip is more flexible; SECURITY_ENFORCED fails loudly.
// stripInaccessible — silent, flexible
SObjectAccessDecision dec = Security.stripInaccessible(
    AccessType.READABLE,
    [SELECT Id, Name, Salary__c, SSN__c FROM Contact]
);
List<Contact> safe = dec.getRecords();
Map<String, Set<String>> removed = dec.getRemovedFields();
// Salary__c and SSN__c stripped if no read access — no exception

// Also works for DML — strip fields the user can't write
SObjectAccessDecision writeDec = Security.stripInaccessible(
    AccessType.UPDATABLE,
    contactsToUpdate
);
update writeDec.getRecords(); // only updates fields the user can edit
Feature | stripInaccessible() | WITH SECURITY_ENFORCED
On violation | Silently removes the field | Throws QueryException
Works on DML | ✅ Yes (UPDATABLE/CREATABLE) | ❌ Read only
Reports removed fields | ✅ getRemovedFields() | ❌ Exception only
API version | v49.0+ | v48.0+
Say This in Interview
"Security.stripInaccessible() is my preferred FLS enforcement — it silently strips inaccessible fields from both queries and DML, returns what was removed via getRemovedFields(), and never crashes the operation. WITH SECURITY_ENFORCED fails the entire query if any field is inaccessible, which is useful when partial data would be worse than no data."
Q77What is Salesforce Shield and what three capabilities does it provide? Advanced
🛡️Salesforce Shield is a premium security add-on providing three capabilities: Platform Encryption (encrypt data at rest at the field level), Event Monitoring (detailed audit logs of user activity), and Field Audit Trail (track field value history beyond the standard 18-month limit — up to 10 years).
Shield Feature | What It Does | Key Benefit
Platform Encryption | Encrypts field values at rest using AES-256 | PII protection, compliance (HIPAA, GDPR)
Event Monitoring | Logs every user action — login, report export, API call | Security threat detection, insider threat
Field Audit Trail | Tracks up to 60 fields per object, history up to 10 years | Regulatory compliance, litigation hold
⚠️ Platform Encryption Limitations
  • Encrypted fields cannot be used in SOQL WHERE clauses (non-deterministic encryption)
  • Encrypted fields cannot be used in formula fields or roll-up summaries
  • Deterministic encryption option allows exact-match filtering but is less secure
Say This in Interview
"Shield adds three enterprise security capabilities — Platform Encryption for field-level AES-256 encryption at rest, Event Monitoring for complete user activity audit logs, and Field Audit Trail for 10-year field history retention. The key trade-off with Platform Encryption is that non-deterministically encrypted fields can't be filtered in SOQL WHERE clauses."
Q78How do Profiles and Permission Sets differ, and what is a Permission Set Group? Medium
🔑A Profile is mandatory — every user has exactly one — it defines baseline permissions and restrictions. Permission Sets add permissions on top of the profile. Permission Set Groups bundle multiple Permission Sets that can be assigned together. Salesforce's modern approach uses minimal-permission profiles + Permission Set Groups for all customisation.
Feature | Profile | Permission Set | PS Group
Required per user | ✅ Exactly one | ❌ Optional, many | ❌ Optional
Can restrict permissions | ✅ Yes | ❌ Additive only | ❌ Additive only
Login hours/IP | ✅ Yes | ❌ No | ❌ No
Muting Permission Set | N/A | ❌ No | ✅ Yes — remove within group
Best practice 2026 | Minimum baseline | Specific permissions | Bundle for job roles
Say This in Interview
"In 2026 best practice is minimal profiles for baseline restrictions and Permission Set Groups for job-role-based permissions — PSGs bundle related Permission Sets and can use Muting Permission Sets to selectively remove permissions within the group. This is far more maintainable than bloated profiles with hundreds of permissions."
Q79What is Apex Managed Sharing and when is it necessary? Advanced
🔐Apex Managed Sharing (programmatic sharing) is necessary when standard declarative sharing tools — OWD, role hierarchy, sharing rules, manual sharing — cannot implement the required sharing logic because it depends on complex runtime conditions that change dynamically.
✅ When You Need Apex Managed Sharing
  • 1️⃣Share record with a specific user only if they are a team member on a related record — logic too complex for criteria-based rules
  • 2️⃣Grant access to records based on many-to-many relationship (junction object) — standard rules can't handle this
  • 3️⃣Time-based sharing — grant access only during a project's active dates, revoke after completion
  • 4️⃣Sharing based on external system data — e.g., grant access to orders only if user is assigned in BC
// Custom object sharing — requires a Sharing Reason defined in Setup
Project__Share share = new Project__Share();
share.ParentId = projectId;
share.UserOrGroupId = teamMemberId;
share.AccessLevel = 'Edit';
share.RowCause = Schema.Project__Share.RowCause.Team_Member__c;
Database.insert(share, false); // allOrNone=false tolerates duplicate shares
Say This in Interview
"Apex Managed Sharing is required when sharing logic depends on runtime conditions that standard declarative tools can't evaluate — like sharing a record with all members of a related team junction object, or implementing time-based access that automatically revokes. It requires custom Sharing Reasons on custom objects to persist through owner changes."
Q80What are the different ways a user can gain access to a record in Salesforce? Medium
🔓Record access in Salesforce is determined by a layered model — Salesforce evaluates all paths and grants the highest level of access found across any of them. A user can gain access through multiple independent paths simultaneously.
Access Path | How | Level
Record Ownership | User owns the record | Full access
Role Hierarchy | Manager in role hierarchy above owner | Read/Edit (if OWD allows)
OWD Public Read/Write | Organisation-wide default | Read or Edit
Sharing Rules | Criteria-based or ownership-based | Read or Edit
Manual Sharing | Record owner shares directly | Read or Edit
Teams | Account/Opportunity team membership | Configurable
Apex Managed Sharing | Programmatic share record | Read, Edit, or Full
Portal/Community access | Contact linked to community user | Based on role
Say This in Interview
"Salesforce evaluates all record access paths — ownership, role hierarchy, OWD, sharing rules, manual shares, team membership, and Apex sharing — and grants the highest access level found across any path. This additive model means you can't restrict access once it's granted through any path, which is why OWD must be the most restrictive baseline."
Q81What is the TxnSecurity.EventCondition interface and when would you implement it? Advanced
🚨TxnSecurity.EventCondition is a custom Apex implementation of Transaction Security policies — it evaluates whether a specific user activity (login, report run, API query, data export) should trigger an action (block, multi-factor auth, notify). More flexible than declarative policies for complex conditional logic.
public class LargeExportPolicy implements TxnSecurity.EventCondition {
    public boolean evaluate(SObject event) {
        ReportEvent re = (ReportEvent) event;
        // Trigger the policy if the user exports more than 10,000 rows
        if(re.RowsProcessed > 10000) {
            // Keep evaluate() side-effect free — record any alert
            // asynchronously (e.g. from a subscriber), not via DML here
            return true; // condition met — trigger the policy action
        }
        return false;
    }
}
Say This in Interview
"TxnSecurity.EventCondition implements custom Transaction Security policy logic — the evaluate() method receives the event, applies complex conditional logic, and returns true to trigger the policy action (block, MFA, notify). I use it when declarative conditions aren't expressive enough, like blocking reports that export more than a threshold of records outside business hours."
Q82How do you troubleshoot "Insufficient Privileges" errors that are not obvious from the user's profile? Advanced
🔍Systematic troubleshooting requires checking all layers: object permissions, field permissions, record sharing, page layout, button/action visibility, and Apex class access. Most "Insufficient Privileges" errors have multiple potential causes and require testing each layer systematically.
🔍 Diagnostic Checklist
  1. Check object permissions — does the profile/PS grant Read/Create/Edit/Delete on the object?
  2. Check field permissions — is a required field invisible or read-only for this profile?
  3. Check record sharing — who owns the record, what's the OWD, are there sharing rules that should apply? (The UserRecordAccess query below answers this directly.)
  4. Run as the user — Setup → Users → Login As (if enabled) to reproduce exactly
  5. Check Apex class access — @AuraEnabled methods require the class to be accessible to the user's profile/PS
  6. Check permission dependencies — some permissions require "View All Data" or other system permissions
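For step 3, the UserRecordAccess object reports the effective access a user ends up with on a specific record after every sharing path is evaluated. A minimal sketch — the two Id variables are placeholders:
// Effective record access for one user on one record
UserRecordAccess access = [
    SELECT RecordId, HasReadAccess, HasEditAccess, HasDeleteAccess, MaxAccessLevel
    FROM UserRecordAccess
    WHERE UserId = :affectedUserId
    AND RecordId = :problemRecordId
];
System.debug('Max access level: ' + access.MaxAccessLevel); // e.g. Read, Edit, All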
Say This in Interview
"I troubleshoot Insufficient Privileges by checking layers — object permissions, field permissions, record sharing, Apex class access, and system permissions. The fastest method is using Login As the affected user to reproduce exactly, then enabling debug logs to see which permission check is failing in the Apex or system code."
Q83What is the difference between Audit Trail and Event Monitoring in Salesforce? Medium
📋Setup Audit Trail records changes to Salesforce configuration (who changed what setup, when). Event Monitoring (Shield) records user activity events (who ran which report, which records were viewed, which APIs were called). Audit Trail is for configuration change tracking; Event Monitoring is for user behaviour security monitoring.
Feature | Setup Audit Trail | Event Monitoring (Shield)
What it tracks | Setup/configuration changes | User activity events
Examples | Profile changed, field added, rule modified | Login, report export, API call, record view
Retention | 180 days | 1-30 days (varies by event type)
Requires Shield | ❌ Free, built-in | ✅ Shield add-on required
Access | Setup → View Setup Audit Trail | EventLogFile sObject via SOQL/API
Say This in Interview
"Audit Trail is free and built-in — tracks setup changes like who modified a profile or created a field. Event Monitoring requires Shield and tracks user behaviour — which reports were exported, which records were viewed, which API calls were made. Audit Trail answers 'who changed the org config'; Event Monitoring answers 'what did users do with the data'."
Q84What is the Named Credential and how does it handle OAuth token refresh automatically? Medium
🔑Named Credentials store endpoint URL and authentication details for callouts — keeping credentials out of Apex code. For OAuth flows, Salesforce automatically handles token refresh — when an access token expires, Salesforce re-authenticates using the stored refresh token before the next callout, transparently to Apex code.
// With Named Credential — clean, secure
HttpRequest req = new HttpRequest();
req.setEndpoint('callout:BC_Integration/api/orders');
// Salesforce handles: URL, OAuth token, token refresh automatically

// Without Named Credential — credential management burden
req.setEndpoint('https://api.businesscentral.com/orders');
req.setHeader('Authorization', 'Bearer ' + getAccessToken());
// You must manually manage the token lifecycle
✅ Benefits of Named Credentials
  • No credentials in Apex code — admins manage authentication without code changes
  • Automatic token refresh for OAuth flows — no manual token lifecycle management
  • Remote Site Setting automatically created — no separate whitelist needed
  • ⚠️Reset token after sandbox refresh — OAuth tokens don't carry over to refreshed sandboxes
Say This in Interview
"Named Credentials store the endpoint and OAuth configuration — Salesforce handles token storage, refresh, and injection into callout headers transparently. Apex code just uses 'callout:NC_Name' as the endpoint. The only operational task is re-authenticating after sandbox refresh since tokens don't carry over."
Q85What is a Connected App and what are its key security configurations? Medium
🔌A Connected App registers an external application with Salesforce for OAuth-based authentication. Key security configurations include OAuth scopes (minimum required), IP relaxation settings, refresh token policy, and whether user approval is required — misconfiguring these can expose the org to security risks.
Security Setting | Best Practice
OAuth Scopes | Minimum required — never "Full access"
IP Relaxation | "Enforce IP restrictions" unless the integration needs flexibility
Refresh Token Policy | Set an expiry — never "valid until revoked" for public clients
Permitted Users | "Admin approved users are pre-authorized" for server integrations
Certificate | Use for JWT Bearer Flow instead of a client secret
Say This in Interview
"A Connected App is the OAuth registration for an external system — I configure minimum required scopes, enforce IP restrictions where possible, set refresh token expiry, and use certificate-based authentication for server integrations instead of client secrets which can be rotated awkwardly."
Q86What is the OAuth 2.0 Client Credentials Flow and when would you use it? Advanced
🔑Client Credentials Flow authenticates using client ID and secret only — no user context, no user consent screen. Used for machine-to-machine server integrations where the external system needs access to Salesforce data on its own behalf, not on behalf of any specific user.
OAuth Flow | User Context | Use Case
Authorization Code | ✅ Specific user | User-facing apps (mobile, web)
JWT Bearer | ✅ Specific user (pre-authorised) | Automated server-to-server
Client Credentials | ❌ No user — integration user | Pure machine-to-machine
Username-Password | ✅ Specific user (legacy) | ❌ Avoid — credentials exposed
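For reference, a sketch of the raw token request behind this flow — the My Domain host, consumer key, and secret are placeholders, and the Connected App must have the Client Credentials flow enabled with a run-as integration user:

POST https://yourdomain.my.salesforce.com/services/oauth2/token
Content-Type: application/x-www-form-urlencoded

grant_type=client_credentials&client_id={consumer_key}&client_secret={consumer_secret}

// Response — the token is scoped to the configured integration user:
{ "access_token": "00D...", "instance_url": "https://yourdomain.my.salesforce.com", "token_type": "Bearer" }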
Say This in Interview
"Client Credentials Flow is for machine-to-machine integrations where there's no user involved — the external system authenticates with client ID and secret and gets a token scoped to an integration user. I use it for background data sync jobs. For user-context integrations I use JWT Bearer Flow instead."
Q87What is Salesforce Event Monitoring and how can it detect insider threats? Advanced
🔍Event Monitoring (Shield) records user activity in EventLogFile objects — every login, report run, record view, data export, and API call creates a log entry. Query these via SOQL or download CSV to detect anomalies: unusual report exports, mass record access, off-hours API activity, or access from unexpected IP ranges.
// Query recent report run events
List<EventLogFile> reportLogs = [
    SELECT LogDate, LogFileLength, LogFile
    FROM EventLogFile
    WHERE EventType = 'Report'
      AND LogDate = LAST_N_DAYS:7
];

// The EventLogFile CSV contains columns such as:
// USER_ID, REPORT_ID, ROWS_EXPORTED, TIMESTAMP, SOURCE_IP
// Question to answer: who exported more than 10,000 rows in the last 24 hours?
🚨 Threat Detection Patterns
  • 🔴Mass export: User exports thousands of records to CSV — potential data exfiltration
  • 🔴Off-hours access: Logins at 3 AM from unusual geography — potential compromised credentials
  • 🔴Privilege spike: User suddenly queries sensitive objects they've never accessed before
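A hedged sketch of automating the mass-export check above — EventLogFile.LogFile is a base64-encoded CSV blob, column names vary by release, and in practice you'd stream these files to a SIEM rather than parse them in Apex:

// Pull yesterday's Report event log and scan rows for large exports
EventLogFile log = [
    SELECT LogFile FROM EventLogFile
    WHERE EventType = 'Report' AND LogDate = YESTERDAY
    LIMIT 1
];
String csv = log.LogFile.toString();
for(String row : csv.split('\n')) {
    // Locate the rows-exported column from the header row, then
    // flag any row whose export count exceeds your threshold
    System.debug(row);
}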
Say This in Interview
"Event Monitoring logs every user action to EventLogFile objects — I analyse these via SOQL or Tableau to detect anomalies like mass data exports, off-hours logins, or sudden access to sensitive records. Combined with Transaction Security policies, it can automatically block suspicious activity in real time."
Q88How do you implement a robust password reset policy and session security in Salesforce? Medium
🔐Session and password security is configured in Setup → Security Controls → Session Settings and Password Policies. Key settings include session timeout, IP locking, HTTPS enforcement, clickjack protection, and MFA enforcement — all critical for enterprise security compliance.
Setting | Recommended Value | Why
Session Timeout | 2 hours | Balance usability vs security
Lock session to IP | ✅ Enable | Prevent session hijacking
Force HTTPS | ✅ Always on | Encrypt all traffic
MFA | ✅ Enforce for all | Required by Salesforce since 2022
Password history | 12 passwords | Prevent password reuse
Max invalid logins | 5 attempts | Brute-force protection
Clickjack protection | ✅ Allow framing by same origin | Prevent UI redressing attacks
Say This in Interview
"Salesforce session security combines IP-locked sessions, 2-hour timeout, enforced HTTPS, and MFA — which Salesforce mandates for all users since 2022. Password policy should enforce minimum length, complexity, history (12 passwords), and account lockout after 5 failed attempts to prevent brute-force attacks."
🚀
Advanced DevOps & Deployment
SFDX, Unlocked Packages, CI/CD, destructive changes, and ALM patterns
Q89–Q100
Q89What are Unlocked Packages and how do they differ from Managed Packages? Advanced
📦Unlocked Packages are a second-generation packaging (2GP) format where source code remains editable in the subscriber org. Managed Packages lock and protect code (IP protection) and support versioned upgrades. Use Unlocked for internal modular deployments, Managed for distributable AppExchange products.
Feature | Unlocked Package | Managed Package
Code visibility | ✅ Visible, editable | ❌ Protected/hidden
Namespace required | ❌ Optional | ✅ Required
AppExchange listing | ❌ Internal use only | ✅ Yes
Upgrade support | ✅ Limited | ✅ Full version management
Modify installed code | ✅ Yes | ❌ No
Best for | Internal modular ALM | ISV / AppExchange products
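A sketch of the Unlocked Package lifecycle with the sf CLI — the package name, source path, and org alias are examples:

# One-time: register the package against the Dev Hub
sf package create --name "SalesCore" --package-type Unlocked --path force-app

# Build a new, installable package version
sf package version create --package "SalesCore" --installation-key-bypass --wait 20

# Install that version into a target org
sf package install --package "SalesCore@1.0.0-1" --target-org uat-sandbox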
Say This in Interview
"Unlocked Packages enable modular deployment for internal orgs — package your Sales Cloud customisation separately from Service Cloud, deploy them independently, and edit code in the subscriber org. Managed Packages protect IP for AppExchange distribution. For enterprise ALM without an AppExchange goal, Unlocked Packages are the modern best practice."
Q90How do you delete a custom field or Apex class from a Production org using SFDX? Advanced
🗑️Deleting metadata from Production requires a destructiveChanges.xml file in your deployment package. This file lists components to delete — Salesforce processes deletions after successful deployment of the rest of the package. Change Sets cannot delete metadata — SFDX/Metadata API is required.
# destructiveChanges.xml — in the deployment package
<?xml version="1.0" encoding="UTF-8"?>
<Package xmlns="http://soap.sforce.com/2006/04/metadata">
    <types>
        <members>OldClass</members>
        <name>ApexClass</name>
    </types>
    <types>
        <members>Account.Old_Field__c</members>
        <name>CustomField</name>
    </types>
</Package>

# Deploy with SFDX — destructive changes run AFTER the regular deploy
sf project deploy start \
  --manifest package.xml \
  --post-destructive-changes destructiveChanges.xml \
  --target-org production
⚠️ Important Rules
  • ⚠️Custom fields deleted in Production go to Recycle Bin — recoverable within 15 days
  • ⚠️Delete all references (in other Apex, Flows, Reports) before deleting the component
  • ⚠️Change Sets CANNOT delete components — SFDX or Metadata API only
Say This in Interview
"Change Sets can't delete metadata — I use destructiveChanges.xml in an SFDX deployment to remove components from Production. I always use --post-destructive-changes so deletions happen after the deployment succeeds, and I verify all references to the component are removed first to avoid deployment failures."
Q91What is Quick Deploy in Salesforce and when does it become available? Medium
Quick Deploy allows deploying to Production without re-running all test classes — using cached test results from a recent validation. It becomes available when a validation run completes successfully with 75%+ test coverage and zero failures within the last 10 days, saving significant deployment time.
Factor | Standard Deploy | Quick Deploy
Test execution | Runs all tests | Uses cached results
Time to deploy | Minutes to hours | Seconds to minutes
When available | Always | After a successful validation in the last 10 days
Risk | Low — fresh test run | Tests may be stale — 10-day window
✅ Best Practice for Production Deployments
1. Run a validation (not a deployment) during low-traffic hours — tests run but nothing deploys
2. Once validation passes, use Quick Deploy during the maintenance window — nearly instant (see the CLI sketch below)
3. Minimises Production downtime from long test runs during business hours
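A sketch of that validate-then-quick-deploy sequence with the sf CLI — the job ID is a placeholder taken from the validation output:

# Step 1 — validate only: tests run, nothing deploys; note the returned job ID
sf project deploy validate --manifest package.xml --test-level RunLocalTests --target-org production

# Step 2 — within 10 days, deploy using the cached validation
sf project deploy quick --job-id 0Af... --target-org production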
Say This in Interview
"Quick Deploy uses cached test results from a validation run completed within the last 10 days — it skips re-running all test classes and deploys almost instantly. My release process: validate during off-hours to warm the cache, then Quick Deploy during the maintenance window for minimal Production downtime."
Q92What is the difference between a full sandbox, partial copy, developer, and developer pro sandbox? Easy
📋Sandbox types differ in data volume, metadata fidelity, and refresh frequency. Full sandbox is the only type with complete production data — critical for realistic performance testing. Developer sandboxes are free and unlimited in number but have minimal data storage.
Sandbox Type | Data | Storage | Refresh | Best For
Developer | None (metadata only) | 200MB | Daily | Development
Developer Pro | None (metadata only) | 1GB | Daily | Complex dev
Partial Copy | Sample (up to 5GB) | 5GB | Every 5 days | QA testing
Full | Complete copy | Same as Prod | Every 29 days | UAT, performance
Say This in Interview
"I use Developer sandboxes for daily development — free, refresh daily, enough for code and config testing. Partial Copy for QA with realistic data samples. Full sandbox only for UAT and performance testing because it has complete production data but can only refresh every 29 days and is expensive to provision."
Q93What happens to Named Credentials and OAuth tokens after a sandbox refresh? Advanced
⚠️After a sandbox refresh, OAuth tokens stored in Named Credentials are invalidated — the refreshed sandbox is a new org with a new org ID, so existing tokens are no longer valid. You must re-authenticate each Named Credential that uses OAuth after every sandbox refresh.
✅ Post-Refresh Checklist
  • ⚠️Named Credentials (OAuth): Re-authenticate each one — click "Edit" and re-authorise
  • ⚠️External Credentials: Re-generate client secrets or re-enter API keys
  • ⚠️Scheduled Jobs: Re-schedule any jobs aborted by refresh
  • ⚠️Email Deliverability: Set to "System Email Only" in sandboxes — prevent accidental emails to real customers
  • ⚠️Custom Settings: Data in Custom Settings refreshes from Production — verify integration endpoints point to sandbox systems not production
Say This in Interview
"OAuth tokens in Named Credentials are invalidated on sandbox refresh because the org ID changes — I maintain a post-refresh runbook: re-authenticate Named Credentials, re-schedule aborted jobs, set Email Deliverability to System Only, and verify Custom Setting endpoints point to sandbox external systems not production ones."
Q94How do you handle a deployment that fails due to test class failures in Production? Advanced
🔧Failed deployments due to test failures are rolled back automatically — no partial deployment. Debug by reading the exact test failure message, reproduce in sandbox, fix the root cause (usually code that doesn't cover an edge case or has hardcoded references), and re-deploy after all tests pass in sandbox.
🔍 Common Test Failure Causes in Production
  • Hardcoded IDs/Names: Test uses hardcoded record types or queue names that exist in sandbox but not production
  • Missing test data: Test queries existing records instead of creating its own — production data structure differs
  • Validation rule conflicts: New test data fails a validation rule existing in production but not sandbox
  • External callout not mocked: Test doesn't implement HttpCalloutMock — callouts not allowed in test context
  • Fix: Run tests in a Full sandbox that mirrors production before deploying
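A minimal sketch of environment-independent test data — the class and service names are illustrative:

@isTest
private class OrderServiceTest {
    @TestSetup
    static void makeData() {
        // Create everything the test needs — never query org data,
        // never hardcode record IDs, record type names, or queue names
        Account acc = new Account(Name = 'Test Account');
        insert acc;
        insert new Order__c(Account__c = acc.Id, Amount__c = 1000);
    }

    @isTest
    static void testApproval() {
        Order__c ord = [SELECT Id FROM Order__c LIMIT 1]; // sees only test data
        Test.startTest();
        OrderService.approve(ord.Id); // hypothetical service under test
        Test.stopTest();
    }
}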
Say This in Interview
"Production test failures are almost always environment-specific issues — hardcoded IDs, missing test data isolation, or validation rules that only exist in production. I prevent this by running the full test suite in a Full sandbox before any Production deployment, and ensuring all tests use Test.isRunningTest() guards and create their own test data rather than querying existing records."
Q95What is the Salesforce Tooling API and how does it differ from the Metadata API? Advanced
🔧Tooling API provides fine-grained access to Salesforce metadata for development tooling — run Apex anonymously, query ApexClass source code, set trace flags, run specific tests, and access code coverage. Metadata API handles bulk deployment of metadata packages. Tooling API is for development automation; Metadata API is for deployments.
Feature | Tooling API | Metadata API
Run Apex anonymously | ✅ ExecuteAnonymous | ❌ No
Query Apex source code | ✅ ApexClass object | ❌ Retrieve only
Set debug trace flags | ✅ TraceFlag object | ❌ No
Bulk component deployment | ❌ One at a time | ✅ Yes
Code coverage data | ✅ ApexCodeCoverage | ❌ No
Use case | IDE tooling, CI automation | Release deployments
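For example, a sketch of pulling per-class coverage through the Tooling API's REST query endpoint — the query is shown unencoded for readability:

GET /services/data/v59.0/tooling/query?q=
    SELECT ApexClassOrTrigger.Name, NumLinesCovered, NumLinesUncovered
    FROM ApexCodeCoverageAggregate
Authorization: Bearer {access_token}

// Each record gives covered/uncovered line counts per class — enough to
// compute org-wide coverage in a CI gate without running a deployment.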
Say This in Interview
"Tooling API is for development automation — running Apex, querying class source, setting trace flags, checking coverage data. I use it in CI pipelines to programmatically run specific test classes and retrieve coverage metrics. Metadata API handles bulk deployments. VS Code's Salesforce extension uses the Tooling API under the hood for its execute anonymous and coverage features."
Q96How do you write a test class for Apex that makes HTTP callouts? Advanced
🧪Apex tests cannot make real HTTP callouts. Implement HttpCalloutMock interface and set it with Test.setMock() — Salesforce intercepts the callout and returns your mock response. For multiple endpoints, use a routing mock that checks the request URL and returns different responses per endpoint.
// Step 1 — Implement the mock
public class BCMockCallout implements HttpCalloutMock {
    public HttpResponse respond(HttpRequest req) {
        HttpResponse res = new HttpResponse();
        res.setStatusCode(200);
        res.setHeader('Content-Type', 'application/json');
        res.setBody('{"orderId":"BC-001","status":"created"}');
        return res;
    }
}

// Step 2 — Use in the test class
@isTest
static void testOrderSync() {
    // Create the test's own data — never rely on org data
    Order__c ord = new Order__c(Name = 'Test Order');
    insert ord;

    Test.setMock(HttpCalloutMock.class, new BCMockCallout());
    Test.startTest();
    BCIntegration.syncOrder(ord.Id);
    Test.stopTest();

    // Assert the order was processed
    Order__c updated = [SELECT BC_Order_Id__c FROM Order__c WHERE Id = :ord.Id];
    System.assertEquals('BC-001', updated.BC_Order_Id__c);
}
Say This in Interview
"Apex tests can't make real callouts — I implement HttpCalloutMock and register it with Test.setMock() before calling the code under test. For integrations with multiple endpoints I build a routing mock that inspects request.getEndpoint() and returns different status codes and payloads to test both success and error handling paths."
Q97How do you implement version control for Salesforce metadata using SFDX and Git? Advanced
📁SFDX source format breaks metadata into individual files — one file per component — making Git diffs meaningful and merge conflicts manageable. The workflow is: retrieve from org → commit to Git → deploy to target org, with CI/CD pipelines automating testing and deployment.
✅ SFDX Git Workflow
1. sf project generate — create a project with source tracking enabled for scratch orgs
2. sf project retrieve start — pull changes from the org into local SFDX-format files
3. git add, commit, push — version control in a feature branch
4. Pull request → CI pipeline runs tests in a scratch org automatically
5. Merge → CD pipeline deploys to the UAT sandbox, then Production after approval
# Pull metadata from the org to local source
sf project retrieve start --manifest package.xml --target-org dev-sandbox

# Deploy to the target org
sf project deploy start --manifest package.xml --target-org uat-sandbox

# Run all tests and check coverage
sf project deploy start --manifest package.xml --test-level RunAllTestsInOrg
Say This in Interview
"SFDX source format makes every metadata component a discrete file — Apex classes, fields, and flows are individual files that Git tracks and diffs cleanly. My workflow is feature-branch development with source-tracked scratch orgs, automated test runs on PR via CI, and CD pipeline deployments to sandbox then production with mandatory approval gates."
Q98What is a Scratch Org and how does it differ from a Sandbox? Medium
🏗️A Scratch Org is a temporary, configurable Salesforce org created from definition files — lives 1-30 days, has no production data, and is provisioned in seconds via CLI. Sandboxes are long-lived copies of production. Scratch Orgs are for CI/CD and feature development; Sandboxes are for UAT and testing with real data.
Factor | Scratch Org | Sandbox
Lifespan | 1-30 days (temporary) | Permanent (until refreshed)
Data | Empty (code only) | Copy of production data
Provisioning time | Seconds | Minutes to hours
Features configurable | ✅ Via definition file | ❌ Mirrors production
Source tracking | ✅ Built-in | ❌ Manual
Best for | CI/CD, isolated dev | UAT, QA with real data
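A sketch of the scratch org lifecycle — the definition file contents and alias are examples:

# config/project-scratch-def.json
{
  "orgName": "Feature Dev",
  "edition": "Developer",
  "features": []
}

# Create, push source, and later discard
sf org create scratch --definition-file config/project-scratch-def.json --duration-days 7 --alias feature-x
sf project deploy start --target-org feature-x
sf org delete scratch --target-org feature-x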
Say This in Interview
"Scratch Orgs are ephemeral — created from a JSON definition file in seconds, used for isolated feature development or CI test runs, and discarded after. Sandboxes are persistent copies of production for UAT and QA. I use Scratch Orgs for daily development with source tracking and sandboxes for QA testing with realistic production data."
Q99What is metadata drift between environments and how do you detect and prevent it? Advanced
📊Metadata drift occurs when environments diverge over time — admins make changes directly in Production or UAT that were never committed to version control or deployed to other environments. It causes unexpected deployment failures and environment inconsistencies that are hard to diagnose.
🔍 Detection Methods
  • 1️⃣SFDX source status: sf project deploy preview compares local source to org and shows differences
  • 2️⃣Tooling API audit: Query ApexClass.LastModifiedDate to find classes modified in org but not in source control
  • 3️⃣Third-party tools: Copado, Gearset, Flosum provide visual environment comparison reports
✅ Prevention Strategy
  • Lock Production changes — all changes must go through source control and deployment pipeline
  • Source-tracked scratch orgs for development — all changes captured automatically
  • Weekly automated comparison between Git and production to detect direct changes
Say This in Interview
"Metadata drift is the enemy of reliable deployments — I prevent it by treating Production as read-only for configuration changes, routing everything through source control and deployment pipelines. I detect existing drift using SFDX's deploy preview command or Gearset's org comparison feature, then reconcile by retrieving the drift back to source control."
Q100What is the OAuth 2.0 Device Flow and when is it used in Salesforce? Advanced
📱The Device Flow authenticates users on devices without browsers or keyboards (smart TVs, CLI tools, IoT devices). The device displays a code, the user authenticates on a separate device with a browser, and the original device polls for the access token. Used by Salesforce CLI for terminal authentication.
🔄 Device Flow Steps
1. The device requests a user code from the Salesforce OAuth endpoint
2. Salesforce returns a user_code and verification_uri
3. The user opens verification_uri on a phone/laptop and enters the user_code
4. The device polls the token endpoint until the user completes authorisation
5. Salesforce returns an access_token to the device
Say This in Interview
"Device Flow is for browserless environments — the CLI, IoT devices, or terminal tools. When you run 'sf org login web' in Salesforce CLI, it uses Device Flow — it opens a browser separately for authentication, then the CLI polls for the token. It's the OAuth flow that bridges the gap between headless devices and browser-based authentication."
⚙️
Advanced Flows, Automation & Governor Limits
Flow internals, transaction boundaries, bulkification, and automation architecture
Q101–Q113
Q101What is the bulkification problem in Flows and how do Record-Triggered Flows handle it? Advanced
Record-Triggered Flows are automatically bulkified by Salesforce — when 200 records trigger the flow, Salesforce groups them and processes the flow logic once for the batch rather than 200 individual invocations. However, Flow logic that makes SOQL queries or DML in loops can still hit governor limits if not designed carefully.
⚠️ Flow Bulkification Pitfalls
  • Get Records inside a Loop: Each iteration issues a SOQL query — 200 iterations = 200 SOQL queries — hits 100 limit
  • Create Records inside a Loop: Each iteration does DML — 200 iterations = 200 DML statements
  • Fix: Get Records BEFORE the loop, store in collection variable, reference collection inside loop
  • Fix: Use Assignment elements inside loop to build a collection, then single Create/Update Records AFTER loop
🏭 Real World

Our Order approval Flow originally had a Get Records element inside the loop to fetch Account details per order. When a user approved 50 orders at once, it issued 50 SOQL queries and hit the limit. We moved the Account query before the loop using a collection filter — single SOQL for all accounts, then reference the collection inside the loop.

Say This in Interview
"Salesforce automatically bulkifies Record-Triggered Flows — they process a batch of records together. The pitfall is SOQL or DML inside loops — each loop iteration consumes a limit. Best practice is Get Records before the loop to fetch all needed data upfront, use a collection inside the loop, and perform all DML in a single Create/Update element after the loop."
Q102How do you call Apex from a Flow and what are @InvocableMethod requirements? Medium
🔌Annotate a static public Apex method with @InvocableMethod to make it available as an Action in Flow Builder. The method must accept a List of inputs and return a List of outputs — this enables bulkification. InvocableVariable annotations mark the fields on input/output wrapper classes.
public class OrderSyncAction {
    public class Input {
        @InvocableVariable(required=true label='Order ID')
        public Id orderId;
    }
    public class Output {
        @InvocableVariable(label='BC Order Number')
        public String bcOrderNumber;
        @InvocableVariable(label='Success')
        public Boolean isSuccess;
    }

    @InvocableMethod(label='Sync Order to BC' description='Sends order to Business Central')
    public static List<Output> syncOrder(List<Input> inputs) {
        List<Output> results = new List<Output>();
        for(Input inp : inputs) {
            Output out = new Output();
            // callBC: helper (not shown) that performs the callout
            // and returns the Business Central order number
            out.bcOrderNumber = callBC(inp.orderId);
            out.isSuccess = out.bcOrderNumber != null;
            results.add(out);
        }
        return results;
    }
}
Say This in Interview
"@InvocableMethod exposes an Apex method to Flow Builder as a custom action. The method must be static, accept List inputs, and return List outputs — this List pattern is required for Flow bulkification. @InvocableVariable marks the fields on wrapper classes that Flow can map to and from variables."
Q103What is the order of execution in Salesforce when a record is saved? Advanced
📋Salesforce processes a record save through a defined sequence — understanding this order is critical for debugging unexpected behaviour when triggers, validation rules, workflows, and flows interact on the same object.
📋 Order of Execution (Simplified)
1. Load the original record from the database (or initialise for new records)
2. Apply new field values from the page/API request
3. Run system validation (required fields, field formats, foreign key checks)
4. Run before-save Record-Triggered Flows
5. Run before triggers (Apex, before insert/update)
6. Run custom validation rules
7. Save the record to the database (not yet committed)
8. Run after triggers (Apex, after insert/update)
9. Run assignment rules, auto-response rules, escalation rules
10. Run after-save Record-Triggered Flows
11. Commit to the database — send emails, publish platform events
Say This in Interview
"The key order is: system validation → before-save Flows → before triggers → validation rules → save to DB → after triggers → after-save Flows → commit. Before-save Flows run before before triggers, which surprises many developers. Understanding this order is critical when debugging why a validation rule fires before your trigger logic runs."
Q104What are before-save Flows and when would you use them over Apex triggers? Advanced
Before-save Record-Triggered Flows run before the record is saved to the database — they can update the record's fields without an additional DML statement. They are faster than after-save flows because they don't trigger a separate database write. Use for field defaulting, calculations, and validation that doesn't need related record access.
Factor | Before-Save Flow | After-Save Flow | Before Trigger
Update triggering record | ✅ Direct (no DML) | ⚠️ Requires update DML | ✅ Direct (no DML)
Access related records | ❌ Limited | ✅ Yes | ✅ Yes
Make callouts | ❌ No | ❌ No (async only) | ❌ No (async only)
Performance | ⚡ Fastest | Slower (extra DML) | Fast
Complexity limit | Low-medium | Medium | Any
Say This in Interview
"Before-save Flows are ideal for field calculations and defaulting — they update the triggering record's fields without a DML statement by modifying the record in memory before it's written. I use them over Apex triggers when the logic is simple enough for Flow and doesn't need related record access, saving a DML operation and improving performance."
Q105What is trigger recursion and how do you prevent it without using a static boolean? Advanced
🔄Trigger recursion occurs when a trigger's DML causes the same trigger to fire again, creating an infinite loop until governor limits are hit. The simple static boolean fix breaks bulkification. The robust solution tracks processed record IDs in a static Set to allow re-processing of different records while preventing re-processing of the same record.
// ❌ NAIVE FIX — a static boolean breaks bulkification
public class TriggerHelper {
    public static Boolean isRunning = false;
}
// If the trigger fires for a batch of 200, the boolean blocks
// every record after the first batch in the same transaction

// ✅ PROPER FIX — track processed IDs
public class TriggerHelper {
    public static Set<Id> processedIds = new Set<Id>();
}

// In the trigger (update/after contexts — Ids are still null in before insert)
List<Order__c> toProcess = new List<Order__c>();
for(Order__c o : Trigger.new) {
    if(!TriggerHelper.processedIds.contains(o.Id)) {
        toProcess.add(o);
        TriggerHelper.processedIds.add(o.Id);
    }
}
// Only processes records not already handled this transaction
Say This in Interview
"A static boolean prevents all recursion including legitimate re-processing of different records in the same transaction. I use a static Set of processed IDs — if a record's ID is already in the Set, skip it. New records not in the Set process normally. This prevents the same record from being processed twice while allowing different records to proceed."
Q106When would you use a Flow over an Apex Trigger and vice versa? Medium
🤔Salesforce recommends Flows first, Apex when needed. Use Flow for declarative automation maintainable by admins. Use Apex when logic is too complex for Flow, requires complex data manipulation, needs callouts, requires precise governor limit control, or must be unit tested with mocks.
Scenario | Flow | Apex Trigger
Field update on save | ✅ Before-save Flow | Possible but overkill
Send email on condition | ✅ Flow | Possible but overkill
Complex collection manipulation | ❌ Difficult | ✅ Apex
HTTP callout needed | ❌ Can't directly | ✅ Apex (async)
Complex branching with 10+ conditions | ⚠️ Maintainable but complex | ✅ Easier to test
Admin needs to modify logic | ✅ Flow (no code needed) | ❌ Requires developer
Say This in Interview
"Flow first for anything an admin might need to maintain — field updates, email alerts, record creation, simple branching. Apex when the logic needs callouts, complex collection processing, precise error handling with custom exceptions, or mock-based unit testing. The test in 2026 is: can this logic be clearly expressed in Flow without becoming a spaghetti diagram?"
Q107What is the Apex CPU Time limit and what are common causes of exceeding it? Advanced
⏱️Apex CPU time limit is 10,000ms for synchronous transactions and 60,000ms for asynchronous. It measures actual CPU processing time — not wall clock time. Waiting for SOQL or callouts doesn't count. Common causes: inefficient loops, complex SOQL queries, String manipulation in loops, regex operations, and JSON serialization of large objects.
🔍 Common CPU Hogs
  • String concatenation in loops: Use List.join() or StringBuilder pattern — String + String in loop is O(n²)
  • SOQL in a loop: Even if query is fast, 150 queries × processing = CPU overload
  • Complex regex on large strings: Pattern.matches() is CPU-intensive
  • JSON.deserialize on large payloads: Use JSON.deserializeUntyped() for partial parsing
  • Monitor: Limits.getCpuTime() returns current CPU ms consumed — add checkpoints to identify bottleneck
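A small sketch contrasting the two string-building approaches with the Limits.getCpuTime() checkpoints mentioned above:

Integer start = Limits.getCpuTime();

// ❌ O(n²) — every += copies the entire string built so far
String slow = '';
for(Integer i = 0; i < 10000; i++) { slow += 'row' + i + ','; }
System.debug('Concat CPU ms: ' + (Limits.getCpuTime() - start));

start = Limits.getCpuTime();

// ✅ Collect parts, join once at the end
List<String> parts = new List<String>();
for(Integer i = 0; i < 10000; i++) { parts.add('row' + i); }
String fast = String.join(parts, ',');
System.debug('Join CPU ms: ' + (Limits.getCpuTime() - start));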
Say This in Interview
"CPU time measures actual processing work — SOQL wait time doesn't count. I profile using Limits.getCpuTime() at checkpoints to find the bottleneck. The biggest CPU culprits are String concatenation in loops, complex regex, and nested loops. I fix by moving to List-based operations, Map lookups instead of contains(), and breaking heavy processing into async chunks."
Q108What is the Heap Size limit in Apex and what causes HeapException? Advanced
💾Heap size limit is 6MB for synchronous Apex and 12MB for asynchronous. HeapException is thrown when objects in memory exceed this limit. Common causes: querying too many fields on large record sets, deserializing large JSON responses, and accumulating records across batch execute() calls without Database.Stateful awareness.
✅ Heap Reduction Strategies
  • SELECT only needed fields: Never SELECT * equivalent — every extra field consumes heap
  • Process and discard: In for loops, process records and null the list before querying the next batch
  • Use Database.QueryLocator: Batch Apex with QueryLocator streams records — doesn't load all into heap at once
  • Transient variables: Mark Visualforce controller variables transient if not needed between page requests
  • Monitor: Limits.getHeapSize() / Limits.getLimitHeapSize() to track consumption
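A small sketch of the monitor-and-release pattern from the list above:

System.debug('Heap: ' + Limits.getHeapSize() + ' / ' + Limits.getLimitHeapSize());

// Query only the fields you need, process, then drop the reference
List<Order__c> chunk = [SELECT Id, Amount__c FROM Order__c LIMIT 2000];
// ... process the chunk ...
chunk = null; // lets the memory be reclaimed before the next allocation

System.debug('Heap after release: ' + Limits.getHeapSize());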
Say This in Interview
"Heap size is memory for all objects in the transaction — 6MB synchronous, 12MB async. I prevent HeapException by querying only needed fields, processing and nulling large lists immediately after use, and using Database.QueryLocator in batch Apex which streams records instead of loading the entire dataset into memory at once."
Q109What are transient variables in Apex controllers and when do you use them? Medium
🔄Transient variables in Visualforce controllers are not serialized into the view state — they reset to null on every page request. Use for large data collections, passwords, calculated values that can be re-computed, and any data that doesn't need to persist between postbacks. Reduces view state size.
public class OrderController {
    // Persists in view state — needed across postbacks
    public Id selectedOrderId { get; set; }

    // Transient — re-queried on each request, not in view state
    public transient List<Order__c> allOrders { get; set; }

    // Transient — never persist sensitive data in view state
    public transient String apiToken { get; set; }

    public void loadOrders() {
        // Re-query on each postback
        allOrders = [SELECT Id, Name, Amount__c FROM Order__c];
    }
}
Say This in Interview
"Transient variables exclude data from Visualforce view state — essential for large record lists that would bloat view state beyond the 135KB limit and for sensitive values like API tokens that should never be serialized. The tradeoff is they must be re-loaded on every postback, so I use them for data that can be efficiently re-queried."
Q110How do you use Salesforce Flow as an API? Advanced
🔌Autolaunched Flows can be invoked via REST API at /services/data/vXX.0/actions/custom/flow/FlowAPIName — external systems can trigger Flows, pass input variables, and receive output variables. This allows exposing Flow logic as API endpoints without Apex code.
// Invoke a Flow via REST API (external system calling Salesforce)
POST /services/data/v59.0/actions/custom/flow/Process_Order_Approval
Authorization: Bearer {access_token}
Content-Type: application/json

{
  "inputs": [{
    "OrderId": "a0X000000001AbcEAE",
    "ApproverNotes": "Approved for Q4 budget"
  }]
}

// Response:
{
  "actionName": "Process_Order_Approval",
  "outputValues": {
    "IsSuccess": true,
    "BCOrderNumber": "BC-2026-001"
  }
}
🏭 Real World

Our customer portal calls a Salesforce Autolaunched Flow via REST API when customers submit orders. The Flow validates the order, creates the Salesforce record, triggers the BC sync, and returns the order number — all without a custom Apex REST endpoint. Admins can modify the Flow logic without any code change on the portal side.

Say This in Interview
"Autolaunched Flows exposed via the Actions API become REST endpoints callable by external systems — input variables as request body, output variables in the response. This lets admins build and modify API logic without Apex, and lets external systems trigger complex Salesforce processes without a custom Apex REST class."
Q111How do you design a scalable trigger framework for a large Salesforce implementation? Advanced
🏗️A trigger framework separates trigger routing from business logic — one trigger per object delegates to a handler class. This prevents multiple triggers on the same object, enables an easy bypass mechanism, supports enabling/disabling logic per trigger context, and makes each handler class independently testable.
// Single trigger per object
trigger OrderTrigger on Order__c (
    before insert, before update, before delete,
    after insert, after update, after delete, after undelete
) {
    new OrderTriggerHandler().run();
}

// Handler extends the TriggerHandler base class
public class OrderTriggerHandler extends TriggerHandler {
    public override void beforeInsert() {
        OrderValidationService.validate(Trigger.new);
    }
    public override void afterInsert() {
        OrderSyncService.syncToBC(Trigger.newMap);
    }
}

// Base class handles routing + bypass
public virtual class TriggerHandler {
    public void run() {
        if(isBypassed()) return; // check Custom Metadata bypass flag
        if(Trigger.isBefore && Trigger.isInsert) beforeInsert();
        if(Trigger.isAfter && Trigger.isInsert) afterInsert();
        // etc...
    }
    protected virtual void beforeInsert() {}
    protected virtual void afterInsert() {}
}
Say This in Interview
"I use a trigger framework with one trigger per object that delegates to a handler class extending a base TriggerHandler. The base class handles context routing, bypass logic via Custom Metadata flags (useful for data migrations), and recursion prevention. Each business operation lives in a service class called from the handler — testable in isolation."
Q112What is the difference between Process Builder and Flow in 2026, and what is Salesforce's direction? Easy
📊Salesforce has announced Process Builder and Workflow Rules are being retired. All automation should be built in Flow Builder going forward. Salesforce has provided migration tools to convert existing Workflow Rules and Process Builder automations to Flows — new features are no longer added to Process Builder.
Tool | Status in 2026 | Action
Workflow Rules | ⚠️ Retiring | Migrate to Record-Triggered Flow
Process Builder | ⚠️ Retiring | Migrate to Record-Triggered Flow
Flow Builder | ✅ Active, primary tool | All new automation here
Apex Triggers | ✅ Active | For complex logic, callouts
Say This in Interview
"Salesforce is retiring Workflow Rules and Process Builder — all existing automations should be migrated to Flow Builder using Salesforce's migration tools. In 2026 I build all new declarative automation in Flow Builder exclusively, and I prioritise migrating any Process Builder automations I encounter to Flow as part of org modernisation work."
Q113How do you handle exceptions in a Flow and surface them to the user? Medium
🚨Flows handle errors using Fault connectors on each element — on failure, the fault path captures {!$Flow.FaultMessage}. For Apex Actions, the Apex code must throw an AuraHandledException to surface a human-readable message to the Flow's fault path and ultimately to the user.
// In Apex called from Flow — throw AuraHandledException for a clean message
@InvocableMethod
public static List<Output> processOrder(List<Input> inputs) {
    try {
        // process...
    } catch(Exception e) {
        // The AuraHandledException message surfaces to the Flow fault path
        throw new AuraHandledException(
            'Order processing failed: ' + e.getMessage()
        );
    }
}

// In Flow:
// 1. Connect a Fault path from the Action element
// 2. Use the {!$Flow.FaultMessage} variable in a Screen element
// 3. Or add a Custom Error Message to show the fault message
Say This in Interview
"Flow errors surface via Fault connectors on each element — the fault path captures {!$Flow.FaultMessage}. For Apex Actions I throw AuraHandledException with a user-friendly message — it's the only exception type that surfaces cleanly to Flow's fault path. Unhandled exceptions in Apex show a generic error; AuraHandledException shows my specific message."
📊
Advanced Reports, Dashboards & Data Management
Report types, dashboard architecture, data quality, and large data volumes
Q114–Q125
Q114What is a custom Report Type and when would you create one? Medium
📊A custom Report Type defines which objects and fields are available in reports of that type — including the relationships between objects and which records are included (with or without related records). Create one when standard Report Types don't include the objects, fields, or relationships your report needs.
✅ When to Create Custom Report Types
  • 1️⃣Report needs fields from a custom object or custom fields not in standard Report Types
  • 2️⃣Report needs a specific join — e.g., "Accounts WITH Opportunities" vs "Accounts WITH or WITHOUT Opportunities"
  • 3️⃣Report needs to span more than 2 object levels (up to 4 objects in a custom Report Type)
  • 4️⃣Need to expose formula fields or specific field sets not available in standard types
🏭 Real World

We created a custom Report Type for "Orders with Line Items and Products" — our standard Order Report Type didn't include Line Item fields or the Product's Division__c field. The custom type lets sales managers see every line item sold per order with the product division for territory analysis in one report.

Say This in Interview
"Custom Report Types define the object relationships and field availability for reports — I create them when standard types don't include my custom objects/fields, when I need a specific 'with or without' join condition, or when I need to span more than two related objects. The Report Type is the schema; the Report is the query on top of it."
Q115What is the running user in a Dashboard and how does it affect data visibility? Medium
👤The Dashboard running user determines whose record visibility is applied when the dashboard data is calculated. If the running user has "View All Data," all viewers see all records regardless of their own permissions. Dynamic Dashboards run as the logged-in user — each viewer sees their own data.
Dashboard Type | Running User | Data Seen | Use When
Static Running User | Specific user (e.g., CEO) | CEO's data visibility | Executive view, shared KPIs
Dynamic Dashboard | Logged-in user | Each user's own data | Rep performance, personal metrics
Run as specified user | Admin-defined | That user's visibility | Team lead sharing their view
Say This in Interview
"Static dashboards run as a fixed user — useful for leadership dashboards where everyone needs the same org-wide view. Dynamic Dashboards run as the logged-in user — each rep sees only their own pipeline. The running user is the critical security gate for dashboards; setting it incorrectly can expose data users shouldn't see."
Q116What is a Joined Report and when would you use it? Advanced
🔗A Joined Report combines multiple report blocks — each from a different Report Type — side by side in one view. Use it when you need to compare data from different objects or different subsets of the same object in a single report view without creating multiple separate reports.
✅ Joined Report Use Cases
  • 1️⃣Compare Won Opportunities vs Lost Opportunities side by side in one report
  • 2️⃣Show Cases and Opportunities for the same account together
  • 3️⃣Compare current year orders vs previous year orders in one view
⚠️ Limitations of Joined Reports
  • Cannot be used as source for standard Dashboard components
  • Cannot be exported to a format that preserves the block structure
  • Limited cross-block formula support
Say This in Interview
"Joined Reports let me combine up to 5 report blocks from different Report Types in one view — ideal for side-by-side comparisons like Won vs Lost opportunities or current year vs last year orders. The key limitation is they can't be used as dashboard source reports, so I use them for ad-hoc analysis but not for operational dashboards."
Q117What is Large Data Volume (LDV) in Salesforce and how do you design for it? Advanced
📦Large Data Volume scenarios involve millions of records on a single object — typically 1M+ records. Standard SOQL, triggers, and reports begin to degrade. Design for LDV requires custom indexes, skinny tables, archiving strategies, and careful query design to maintain performance.
✅ LDV Design Principles
  • 1️⃣Custom indexes: Request Salesforce Support to index frequently-filtered fields — reduces full table scans
  • 2️⃣Skinny tables: Salesforce can create custom skinny tables with only frequently-queried fields — bypasses full record row access
  • 3️⃣Archiving: Move historical records to BigObjects or external storage — keep active dataset small
  • 4️⃣Avoid sharing on large objects: Complex sharing rules on millions of records cause heavy share table growth
  • 5️⃣Batch sizes: Use smaller batch sizes in Batch Apex to reduce lock contention
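To make the selective-query and batch-size points concrete, a hedged sketch — the field names reuse this document's Order__c examples, and OrderArchiveBatch is a hypothetical batch class:

// ❌ Non-selective on a 5M-row object — a negative filter forces a full scan
// SELECT Id FROM Order__c WHERE Status__c != 'Closed'

// ✅ Selective — filters on an indexed field (External ID fields are auto-indexed)
List<Order__c> recent = [
    SELECT Id FROM Order__c
    WHERE BC_Order_Number__c = 'BC-2026-001'
];

// Smaller batch scope reduces lock contention on contested parents
Database.executeBatch(new OrderArchiveBatch(), 50);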
Say This in Interview
"LDV design requires working with Salesforce Support for custom indexes and skinny tables, archiving historical records to BigObjects, avoiding complex sharing on high-volume objects, and designing all queries to be selective. On objects with 5M+ records I run SOQL performance analysis to confirm query plans are index-driven before production deployment."
Q118What is a BigObject in Salesforce and what are its limitations? Advanced
🏛️BigObjects store massive amounts of data — billions of records — optimised for historical archiving and large-scale analytics. They have a distinct query language (SOQL-like but with restrictions), cannot use standard DML (use insertImmediate() or Bulk API), and don't support triggers, workflows, or standard reports.
Feature | Standard Object | BigObject
Record capacity | Millions | Billions
Triggers | ✅ Supported | ❌ Not supported
DML | Standard insert/update/delete | insertImmediate() or Bulk API
SOQL | Full SOQL support | Limited — indexed fields only in WHERE
Standard Reports | ✅ Yes | ❌ No
Update records | ✅ Yes | ❌ Upsert via composite index only
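A sketch of the BigObject DML and query constraints — Order_Archive__b and its fields are hypothetical:

// Writes bypass standard DML — use insertImmediate (or the Bulk API)
Order_Archive__b row = new Order_Archive__b();
row.Order_Number__c = 'BC-2026-001';  // part of the BigObject's index
row.Archived_On__c = Datetime.now();
Database.insertImmediate(row);

// Queries must filter on indexed fields, in index order
List<Order_Archive__b> hits = [
    SELECT Order_Number__c, Archived_On__c
    FROM Order_Archive__b
    WHERE Order_Number__c = 'BC-2026-001'
];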
Say This in Interview
"BigObjects are for billion-scale archiving — I use them to store historical transaction logs or audit records that must be retained for compliance but are rarely queried. The key constraints are no triggers, no standard reports, and SOQL must filter exclusively on indexed fields. I access BigObject data via SOQL in Apex or the Bulk API for exports."
Q119How do you implement cross-object roll-up calculations when standard roll-up summary fields aren't available? Advanced
🔢Standard Roll-Up Summary fields only work on master-detail relationships. For lookup relationships or complex aggregations, use DLRS (Declarative Lookup Rollup Summaries) for admin-managed rollups, or Apex triggers that query child records and update the parent aggregate field on insert/update/delete of children.
// Apex rollup — recalculate the Account total when Orders change
trigger OrderTrigger on Order__c (after insert, after update, after delete, after undelete) {
    Set<Id> accountIds = new Set<Id>();
    // On update with reparenting, also collect the old Account__c values
    for(Order__c o : (Trigger.isDelete ? Trigger.old : Trigger.new)) {
        if(o.Account__c != null) accountIds.add(o.Account__c);
    }

    // Default every parent to 0 so an Account whose last Order
    // was deleted gets reset rather than skipped
    Map<Id, Account> accsToUpdate = new Map<Id, Account>();
    for(Id accId : accountIds) {
        accsToUpdate.put(accId, new Account(Id = accId, Total_Order_Value__c = 0));
    }

    for(AggregateResult ar : [
        SELECT Account__c, SUM(Amount__c) total
        FROM Order__c
        WHERE Account__c IN :accountIds
        GROUP BY Account__c
    ]) {
        accsToUpdate.get((Id)ar.get('Account__c')).Total_Order_Value__c = (Decimal)ar.get('total');
    }
    update accsToUpdate.values();
}
Say This in Interview
"For lookup relationship rollups I either use DLRS for admin-managed declarative rollups or an Apex trigger pattern — collect parent IDs from trigger context, aggregate child records via SOQL with GROUP BY, and update the parent field in one DML. I always handle insert, update, AND delete contexts to keep the rollup accurate."
Q120What is the External ID field and how is it used in data loading and integrations? Medium
🔑An External ID is a custom field marked as unique that stores an identifier from an external system. It enables upsert operations — insert if the external ID doesn't exist, update if it does — without knowing the Salesforce record ID. Also used for relationship mapping during data loads and as an indexed field for fast lookups.
// Upsert using an External ID — no need to know the Salesforce record ID
Order__c order = new Order__c();
order.BC_Order_Number__c = 'BC-2026-001'; // External ID field
order.Amount__c = 50000;
order.Status__c = 'Approved';

// Upsert uses BC_Order_Number__c as the matching key
Database.upsert(order, Order__c.BC_Order_Number__c, false);
// Creates if BC-2026-001 doesn't exist, updates if it does

// Also used for relationship mapping in Data Loader:
// instead of a Salesforce Account ID, reference the Account by external ID
// Account:External_ID__c = "ACC-001"
Say This in Interview
"External ID fields enable upsert without knowing Salesforce record IDs — essential for integrations where the external system has its own IDs. I also use them for relationship mapping during data loads (reference parent by external ID instead of Salesforce ID) and as indexed fields for fast lookups in integration callout responses."
Q121What is data skew in Salesforce and how does it affect performance? Advanced
⚠️Data skew occurs when one record in a parent-child relationship has an unusually large number of child records — e.g., one Account with 100,000 related Contacts. It causes lock contention when many concurrent transactions modify children of the same parent, and performance degradation in sharing recalculation.
Skew Type | Problem | Mitigation
Account Ownership Skew | One user owns 100K+ Accounts — sharing calculation overload | Reassign to multiple users or groups
Parent-Child Skew | One Account has 10K+ Contacts — lock contention on saves | Reduce direct children, use intermediate objects
Lookup Skew | Many records point to the same lookup target | Custom index on the lookup field
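A sketch of detecting parent-child skew by querying child counts per parent — on very large orgs, run this through Batch Apex or the Bulk API to stay inside aggregate query limits:

List<AggregateResult> skewed = [
    SELECT AccountId, COUNT(Id) childCount
    FROM Contact
    WHERE AccountId != null
    GROUP BY AccountId
    HAVING COUNT(Id) > 10000
];
for(AggregateResult ar : skewed) {
    System.debug('Skewed parent: ' + ar.get('AccountId') + ' — ' + ar.get('childCount'));
}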
Say This in Interview
"Data skew causes lock contention — when 50 concurrent API calls all try to update children of the same Account, they queue waiting for the parent lock. I detect skew by querying child counts per parent and address it by distributing ownership across multiple users/groups or redesigning the data model to avoid deep parent-child concentration."
Q122How do you implement duplicate management in Salesforce beyond the standard duplicate rules? Advanced
🔄Standard Duplicate Rules use Matching Rules to identify duplicates at save time and either block or alert. For more sophisticated duplicate management — fuzzy matching, cross-object deduplication, bulk deduplication of existing records — use third-party tools like Cloudingo or custom Apex that queries potential matches during integration.
✅ Deduplication Architecture
  • 1️⃣Standard Matching Rules: Exact and fuzzy match on specified fields — configured in Setup
  • 2️⃣Duplicate Rules: Action (block/alert) when Matching Rule finds a match
  • 3️⃣Custom Apex deduplication: Before trigger queries for potential matches using key fields and merges or flags duplicates
  • 4️⃣Third-party tools: Cloudingo, DemandTools for mass deduplication of existing records
// Custom pre-insert dedup check in a trigger
// incomingEmails: Set<String> of emails collected from Trigger.new
List<Lead> potentialDups = [
    SELECT Id, Email
    FROM Lead
    WHERE Email IN :incomingEmails
      AND IsConverted = false
    LIMIT 1000
];
Map<String, Lead> emailToLead = new Map<String, Lead>();
for(Lead l : potentialDups) {
    emailToLead.put(l.Email, l);
}
Say This in Interview
"Standard Duplicate Rules handle real-time prevention well. For integration deduplication I add a before-trigger check that queries existing records by email or external ID before insert. For bulk cleanup of existing duplicates I use third-party tools — Apex merge() is available but requires careful handling of related records and master selection logic."
Q123What is a Skinny Table in Salesforce and when would you request one? Advanced
A Skinny Table is a Salesforce-internal optimisation where a custom subset of frequently-queried fields is stored in a separate narrow table — queries on those fields scan this small table instead of the full wide record table. Requested via Salesforce Support for objects with millions of records experiencing persistent query timeouts.
✅ When to Request a Skinny Table
  • 1️⃣Object has 1M+ records and queries consistently perform poorly despite custom indexes
  • 2️⃣The same 5-10 fields are queried in most SOQL queries on this object
  • 3️⃣Custom indexes have not resolved the performance issue
⚠️ Skinny Table Limitations
  • ⚠️Only available for custom objects and specific standard objects
  • ⚠️Cannot include fields from related objects
  • ⚠️Adds slight overhead on writes — Salesforce must update both tables
Say This in Interview
"Skinny Tables are a last-resort performance optimisation for very large objects — Salesforce creates an internal narrow table with only your most-queried fields. I'd request one only after custom indexes haven't resolved query timeouts on a multi-million record object. The process requires a Salesforce Support case with performance data justification."
Q124How do you implement row-level formula fields that reference user session data? Advanced
🔢Formula fields can reference special merge fields tied to the running user — $User.Id, $User.ManagerId, $Profile.Name — enabling per-user computed values. Combine with IF() and ISPICKVAL() for conditional display. These formulas evaluate dynamically at access time based on the logged-in user's context.
/* Formula: show a "MY ACCOUNT" badge if the running user is the owner */
IF(OwnerId = $User.Id, "⭐ MY ACCOUNT", "")

/* Formula: show the territory name only to managers */
IF($Profile.Name = "Sales Manager",
   Territory__c,
   "Restricted"
)

/* Formula: days since last contact, flag if overdue */
IF(
  TODAY() - Last_Contact_Date__c > 30,
  "⚠️ OVERDUE",
  TEXT(TODAY() - Last_Contact_Date__c) & " days"
)
Say This in Interview
"Formula fields using $User.Id or $Profile.Name evaluate dynamically per logged-in user — the same field shows different values to different users based on their context. I use this for ownership indicators, conditional visibility of sensitive fields, and SLA status calculations that depend on the current date relative to stored dates."
Q125What is the difference between record types and page layouts? How do they interact? Easy
📋Record Types segment records by business process — each type has its own picklist values, page layouts, and can trigger different processes. Page Layouts control which fields are visible and required on the record detail page. Record Types and Page Layouts are linked via Profile assignment — each profile gets one layout per record type.
Feature | Record Type | Page Layout
Controls picklist values | ✅ Yes | ❌ No
Controls field visibility | ❌ No | ✅ Yes
Controls required fields | ❌ No | ✅ Yes
Enables separate processes | ✅ Yes | ❌ No
Assigned via | Profile/Permission Set | Profile + Record Type combination
Say This in Interview
"Record Types segment picklist values and business processes — an Opportunity might have 'New Business' and 'Renewal' record types with different stages. Page Layouts control which fields appear and which are required — assigned to profiles per record type. So a Sales Rep sees different fields than a Sales Manager for the same Renewal opportunity."
🤖
Agentforce, AI & Advanced Platform Features
Autonomous agents, Einstein, Prompt Builder, and AI architecture in 2026
Q126–Q138
Q126What is Agentforce and how does it differ from Einstein Copilot? Advanced
🤖Agentforce is Salesforce's platform for building autonomous AI agents that take multi-step actions on behalf of users — creating records, making decisions, and completing tasks. Einstein Copilot was the conversational AI assistant embedded in Salesforce UI. In 2026, Einstein Copilot has been rebranded as the Agentforce platform with expanded autonomous capabilities.
Feature | Einstein Copilot (Legacy) | Agentforce (2026)
Interaction type | Conversational assistant | Autonomous multi-step actions
Actions | Guided, user approves each step | Can act independently
Channels | Salesforce UI only | UI, chat, email, API, voice
Data grounding | Data Cloud + org data | Data Cloud + org data + external
Custom actions | Limited | ✅ Full via Apex, Flow, APIs
Say This in Interview
"Agentforce builds autonomous agents that complete multi-step tasks independently — not just answer questions. A customer service agent can read a case, query order history, identify the issue, draft a resolution, and update the record without human intervention at each step. Einstein Copilot was the predecessor; Agentforce is the evolved autonomous platform."
Q127What is Prompt Builder in Salesforce and how do you create a custom prompt template? Advanced
✍️Prompt Builder is a Salesforce tool for creating reusable AI prompt templates grounded in Salesforce data. Templates can include merge fields from Salesforce records, combine structured data with instructions, and be invoked from Flow, Apex, or Agentforce actions — making AI output consistent and contextually accurate.
🏗️ Prompt Template Types
  • 1️⃣Field Generation Templates: Generate a field value using AI — e.g., summarise case notes into Resolution__c
  • 2️⃣Record Summary Templates: Generate a summary of a record and its related data
  • 3️⃣Sales Email Templates: Generate personalised outbound emails based on contact and opportunity data
🏭 Real World

We built a Prompt Template for generating order confirmation emails — the template merges {Order.Name}, {Order.Amount__c}, {Account.Name}, and {Opportunity.Product_List__c} into a structured prompt asking the LLM to write a professional confirmation email in our brand voice. The resulting email is generated in 2 seconds and requires minimal editing.

Say This in Interview
"Prompt Builder creates reusable prompt templates with Salesforce merge fields — the template grounds the AI in actual record data rather than generic training knowledge. I invoke templates from Flow for automated generation or from Agentforce actions for agent-driven scenarios. The merge field grounding is what makes outputs accurate and relevant to the specific record."
Q128How does Agentforce grounding work and why is it important? Advanced
🔗Grounding connects the AI model's responses to actual Salesforce data — preventing hallucinations and ensuring outputs reflect real business information. Agentforce grounds on Salesforce records, Data Cloud unified profiles, and knowledge articles rather than relying solely on the LLM's training data.
🔍 Grounding Sources in Agentforce
  • CRM Records: Account, Contact, Case, Opportunity — retrieved via SOQL at runtime
  • Data Cloud Profiles: Unified customer profiles combining data from multiple sources
  • Knowledge Articles: Company documentation, product specs, support articles
  • Apex Actions: Real-time data retrieval from external systems mid-conversation
  • Without grounding: LLM uses only training data — generates plausible but potentially wrong answers about your business
Say This in Interview
"Grounding anchors AI responses to real Salesforce data — the agent queries CRM records, Data Cloud profiles, and knowledge articles before generating a response. Without grounding, an LLM gives confident generic answers that may not reflect your actual customer data or business rules. Grounding is what makes Agentforce enterprise-safe."
Q129How do you prevent Agentforce from exposing sensitive data to users who shouldn't see it? Advanced
🔐Agentforce respects the running user's record access and FLS — it only retrieves and presents data the user can already see in Salesforce. Additional safeguards include topic configuration (limiting which records agents can access), data masking in prompt templates, and topic guardrails that prevent agents from discussing certain subjects.
✅ Security Layers for Agentforce
  • 1️⃣User Context: Agentforce runs in the logged-in user's context — inherits their sharing and FLS restrictions
  • 2️⃣Topic Restrictions: Define which records and objects an agent topic can access — limit scope
  • 3️⃣Prompt Guardrails: Instructions in system prompt to never reveal certain fields or data categories
  • 4️⃣Action permissions: Control which actions (Apex, Flows) agents can invoke
  • 5️⃣Audit logging: All agent conversations logged for security review
Say This in Interview
"Agentforce inherits the user's Salesforce sharing and FLS — it can't show data the user can't already access directly. Beyond that, topic configuration limits which objects an agent can interact with, system prompt guardrails add explicit data handling instructions, and all agent conversations are logged for security audit."
Q130What is Einstein Prediction Builder and when would you use it over a custom ML model? Advanced
🔮Einstein Prediction Builder is a no-code tool for building binary classification predictions (will this opportunity close? will this customer churn?) using Salesforce record data. Use it for standard binary predictions on Salesforce objects. Use custom ML models (via external APIs) for complex multi-class predictions, proprietary algorithms, or non-Salesforce data sources.
Factor | Prediction Builder | Custom ML Model
Setup | No-code — admin configurable | Data science team required
Prediction types | Binary (Yes/No) | Any — binary, multi-class, regression
Data source | Salesforce records only | Any — Salesforce + external
Explainability | ✅ Top factors shown | Depends on model
Time to value | Days | Weeks to months
Say This in Interview
"Einstein Prediction Builder is the right tool when the prediction is binary and the training data is in Salesforce — configure it in days without data science expertise. I'd bring in a custom ML model when the prediction requires non-Salesforce data, multiple output classes, or a proprietary algorithm that Prediction Builder's automated approach can't replicate."
Q131What is Einstein Next Best Action and how do you configure recommendations? Advanced
💡Einstein Next Best Action surfaces contextual recommendations to users on Lightning pages — which action to take, which offer to make, which script to follow. Recommendations are driven by Recommendation objects with strategies built in Strategy Builder (Flow-like tool) that evaluate eligibility rules and rank options.
🏗️ Configuration Steps
  • 1️⃣Create Recommendation records — each with a Name, Description, and optional image
  • 2️⃣Build a Recommendation Strategy in Strategy Builder — define eligibility rules (when to show), exclusions (when not to), and sorting (which to show first)
  • 3️⃣Add the einstein:recommendation Aura component or the Einstein Next Best Action LWC to the Lightning record page
  • 4️⃣Configure the action Flows that execute when a user accepts a recommendation
🏭 Real World

On our Account page, Next Best Action surfaces "Offer Extended Payment Terms" when a customer has been with us 5+ years and their last order was over INR 50 lakhs. The strategy evaluates Account tenure and recent order value, shows the recommendation to the sales rep, and launches an email Flow when they click Accept.

Say This in Interview
"Next Best Action uses Recommendation Strategies — built in Strategy Builder — to evaluate eligibility rules and surface ranked recommendations contextually on Lightning pages. When a user accepts a recommendation, it triggers a Flow or Quick Action to execute the suggested action. I use it for in-the-moment sales guidance without training reps on every scenario."
Q132What is the difference between a synchronous and asynchronous Agentforce action? Advanced
⏱️Synchronous Agentforce actions execute within the agent conversation turn and return results immediately — the agent waits for the result before continuing. Asynchronous actions are long-running tasks the agent initiates and moves on, checking status later or being notified on completion. Use sync for quick lookups and decisions, async for sending emails, creating records, or calling slow external APIs.
Action Type | Execution | Use When
Synchronous | Completes within turn — user waits | Quick lookups, data retrieval, decisions
Asynchronous | Agent initiates, continues — notified later | Email sending, record creation, slow APIs
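Custom agent actions are commonly exposed as invocable Apex. The sketch below shows a synchronous lookup action; the class, object, and field names (GetOrderStatusAction, Order__c, Status__c) are illustrative, not part of any standard API.

// A minimal synchronous action sketch, exposed to Agentforce as invocable Apex.
// All class/object/field names here are hypothetical.
public with sharing class GetOrderStatusAction {
    public class Request {
        @InvocableVariable(required=true)
        public Id orderId;
    }
    public class Result {
        @InvocableVariable
        public String status;
    }
    @InvocableMethod(label='Get Order Status' description='Returns the order status to the agent')
    public static List<Result> getStatus(List<Request> requests) {
        // Bulkified: one query covers every request in the invocation
        Set<Id> orderIds = new Set<Id>();
        for (Request r : requests) {
            orderIds.add(r.orderId);
        }
        Map<Id, Order__c> orders = new Map<Id, Order__c>(
            [SELECT Id, Status__c FROM Order__c WHERE Id IN :orderIds]);
        List<Result> results = new List<Result>();
        for (Request r : requests) {
            Result res = new Result();
            res.status = orders.containsKey(r.orderId) ? orders.get(r.orderId).Status__c : null;
            results.add(res);
        }
        return results;
    }
}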
Say This in Interview
"Synchronous actions block the conversation turn and return results immediately — appropriate for record lookups and quick decisions where the agent needs the result to plan its next step. Asynchronous actions let the agent initiate a task and move on — I use async for operations like sending emails or triggering integration workflows that don't need to complete before the agent responds."
Q133What is CRM Analytics (formerly Tableau CRM / Einstein Analytics) and how does it differ from standard Salesforce reports? Advanced
📊CRM Analytics (formerly Tableau CRM, and before that Einstein Analytics) is an advanced analytics platform that extracts Salesforce data into a columnar datastore optimised for fast aggregations on large datasets, AI-powered insights, and interactive dashboards. Unlike standard reports, which query live Salesforce data, CRM Analytics operates on replicated data snapshots, enabling sub-second queries on millions of records.
Factor | Standard Reports | CRM Analytics
Data freshness | Real-time | Scheduled refresh (hourly/daily)
Query engine | Salesforce database | Optimised columnar datastore
Record limit | 2,000 (UI), more via API | Millions of rows
AI predictions | ❌ Basic | ✅ Einstein Discovery built-in
External data | ❌ Salesforce only | ✅ CSV, databases, APIs
License required | Standard Salesforce | CRM Analytics add-on
Say This in Interview
"Standard reports query live Salesforce data — great for operational reporting on current records. Tableau CRM extracts data into a columnar engine for sub-second analytics on millions of rows, combining Salesforce and external data with built-in AI insight discovery. I recommend it when standard reports hit record limits or when the business needs predictive analytics beyond what Reports can provide."
Q134What is retrieval-augmented generation (RAG) and how does Salesforce implement it? Advanced
🔍Retrieval-Augmented Generation (RAG) enhances LLM responses by first retrieving relevant information from a knowledge base and injecting it into the prompt — grounding the AI's response in factual, current data rather than relying solely on training data. Salesforce implements RAG through Data Cloud vector embeddings and knowledge article indexing for Agentforce.
🔄 RAG Flow in Salesforce
  • 1️⃣User sends a query to the Agentforce agent
  • 2️⃣The query is converted to a vector embedding
  • 3️⃣Vector similarity search finds relevant Salesforce records and knowledge articles
  • 4️⃣Retrieved content is injected into the LLM prompt as context
  • 5️⃣The LLM generates a response grounded in the retrieved Salesforce-specific context
Say This in Interview
"RAG gives LLMs access to knowledge they weren't trained on — Salesforce implements it by vectorising CRM records and knowledge articles in Data Cloud, then retrieving semantically similar content at query time and injecting it into the prompt. This is what makes Agentforce know about your specific products, policies, and customer history rather than just generic CRM concepts."
Q135What are the key design principles for building reliable Agentforce agents in Production? Advanced
🏗️Reliable Agentforce agents require: clear topic boundaries (what the agent can and cannot do), well-tested actions with explicit error handling, human escalation paths when confidence is low, comprehensive audit logging, and gradual rollout starting with low-risk use cases before expanding to high-stakes decisions.
✅ Production Agentforce Principles
  • 1️⃣Narrow scope: One agent for one domain — customer service agent, order agent, support agent
  • 2️⃣Explicit guardrails: Clear instructions on what the agent cannot do — never process refunds over $X without human
  • 3️⃣Human escalation: Every agent must have a clear path to escalate to a human when stuck or uncertain
  • 4️⃣Test with adversarial inputs: Try to break the agent with edge cases before production
  • 5️⃣Monitor conversation quality: Review agent conversations regularly — add topics or guardrails when failure patterns emerge
Say This in Interview
"Reliable Agentforce production design starts with narrow scope — one agent doing one domain well rather than one agent doing everything badly. Explicit guardrails define what the agent cannot do autonomously, every decision path has a human escalation option, and I review agent conversation logs weekly to identify failure patterns and tighten the design."
Q136What is Apex Mutation Testing and why does it matter for test quality? Advanced
🧬Mutation testing automatically introduces small code changes (mutations) into your Apex — like changing > to >= or flipping a boolean — and checks if your tests catch them. If a test still passes after a mutation, the test wasn't actually verifying that behaviour. High coverage % doesn't mean your tests are meaningful — mutation testing proves they are.
Mutation Type | Example | Test Should Fail If...
Conditional boundary | Change > to >= | Boundary value matters
Boolean negation | Flip true to false | Boolean logic is tested
Return value | Return null instead of result | Return value is asserted
DML removal | Remove insert statement | DML side-effect is verified
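To make the boundary case concrete, here is a hypothetical discount rule and the assertion that would catch a > to >= mutation; the class and values are invented for illustration.

// Hypothetical target code: the mutation would change '>' to '>='
public class DiscountService {
    public static Decimal discountFor(Decimal orderAmount) {
        return (orderAmount > 10000) ? 0.10 : 0.0;
    }
}

// A test that survives the mutation is weak; this one fails if '>' becomes '>='
@isTest
private class DiscountServiceTest {
    @isTest static void boundaryIsExclusive() {
        // Exactly 10000 must NOT get a discount, which kills the '>=' mutant
        System.assertEquals(0.0, DiscountService.discountFor(10000));
        // Just above the boundary must get 10%
        System.assertEquals(0.10, DiscountService.discountFor(10000.01));
    }
}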
Say This in Interview
"100% code coverage can mask completely ineffective tests — if you remove every System.assert, coverage stays 100% but tests verify nothing. Mutation testing proves tests are actually checking behaviour by introducing bugs and confirming tests fail. It's the only reliable measure of test quality beyond coverage percentage."
Q137How do you test Apex code that relies on UserInfo values like locale, timezone, or profile? Advanced
👤UserInfo values reflect the running user in test context. To test logic that branches on user profile or permission, create test users with specific profiles using User records in @TestSetup and run test logic using System.runAs(testUser). This simulates different user contexts without changing the actual running user.
@isTest
static void testAdminBehaviour() {
    // Create a user with a specific profile for testing
    Profile p = [SELECT Id FROM Profile WHERE Name = 'System Administrator' LIMIT 1];
    User adminUser = new User(
        Alias = 'tadmin',
        Email = 'testadmin@test.com',
        // Unique username avoids collisions across test runs and orgs
        Username = 'testadmin' + System.currentTimeMillis() + '@test.com.sandbox',
        ProfileId = p.Id,
        TimeZoneSidKey = 'America/New_York',
        LocaleSidKey = 'en_US',
        LanguageLocaleKey = 'en_US',
        EmailEncodingKey = 'UTF-8',
        LastName = 'Test'
    );
    insert adminUser;

    System.runAs(adminUser) {
        // Code inside here runs as adminUser:
        // UserInfo.getUserId() returns adminUser.Id
        // UserInfo.getProfileId() returns the admin profile Id

        // Creating Order__c inside runAs also avoids the MIXED_DML_OPERATION
        // error that occurs when setup (User) and non-setup DML share a transaction
        Order__c order = new Order__c(Status__c = 'Pending');
        insert order;

        OrderService.processOrder(order.Id);
        System.assertEquals('Approved',
            [SELECT Status__c FROM Order__c WHERE Id = :order.Id].Status__c);
    }
}
Say This in Interview
"System.runAs() lets me test code that branches on user profile, permissions, or locale by creating a test user with specific attributes and running the logic as that user. It's the only reliable way to test user-context-dependent Apex without using SeeAllData=true or relying on the test runner's user profile."
Q138What is a blue-green deployment strategy and can it be applied to Salesforce? Advanced
🔵🟢Blue-green deployment runs two identical Production environments — one live (blue), one staging the new release (green). After testing, traffic switches from blue to green instantly. In Salesforce, full blue-green isn't possible due to the single-org model, but sandboxes + feature flags + phased feature activation approximate the pattern.
✅ Salesforce Approximation
  • 1️⃣Full Sandbox: Acts as "green" — full copy of Production for final testing before deployment
  • 2️⃣Feature flags via Custom Metadata: Deploy code to Production but keep it inactive until a flag is enabled (see the sketch after this list)
  • 3️⃣Permission-based activation: New feature only visible to pilot users — enable for all when confident
  • 4️⃣Quick Deploy: Validates against Production, then deploys instantly — minimises live downtime window
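A minimal feature-flag sketch, assuming a Custom Metadata type Feature_Flag__mdt with a checkbox field Is_Active__c; all names here are hypothetical.

// Hedged sketch: Custom Metadata feature flag.
// Feature_Flag__mdt and Is_Active__c are assumed, not standard metadata.
public class FeatureFlags {
    public static Boolean isActive(String developerName) {
        Feature_Flag__mdt flag = Feature_Flag__mdt.getInstance(developerName);
        return flag != null && flag.Is_Active__c;
    }
}

// Usage inside any service method: deployed code stays dormant until
// the flag record is activated in Production
if (FeatureFlags.isActive('New_Pricing_Engine')) {
    NewPricingEngine.calculate(orderId);      // new path (hypothetical)
} else {
    LegacyPricingService.calculate(orderId);  // existing behaviour (hypothetical)
}

Flipping the flag is a data change rather than a deployment, which is what gives the pattern its blue-green-like instant switch.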
Say This in Interview
"True blue-green isn't possible in Salesforce's single-org model, but I approximate it with feature flags via Custom Metadata — deploy code to Production but gate it behind a flag that only enables it for pilot users. Once validated, I flip the flag for everyone. Quick Deploy minimises the deployment window by pre-validating in a separate step."
Q139What are Scratch Orgs and how do they differ from Sandboxes? Medium
🆕Scratch Orgs are temporary, source-driven Salesforce environments created from a definition file — not copied from Production. They have no data, are fully configurable via code, and expire after 1-30 days. Sandboxes are copies of Production (data + metadata) that persist indefinitely. Scratch Orgs are for development; Sandboxes are for testing and staging.
Feature | Scratch Org | Sandbox
Source | Definition file (code-driven) | Copy of Production
Data | Empty (test data added via scripts) | Partial or full copy of Production data
Lifespan | 1-30 days (expires) | Persistent until deleted
Configuration | 100% via scratch-def.json | Manual or change set
Best for | Feature development, unit testing | UAT, staging, integration testing
Org limit | Limited per Dev Hub | Limited by edition
Say This in Interview
"Scratch Orgs are code-created, disposable environments — I spin one up from a definition file, develop the feature, test it, and discard it. No manual setup, no contamination from previous changes. Sandboxes are Production copies for UAT and staging where real business users test with familiar data before go-live."
Q140What are Unlocked Packages and how do they enable modular Salesforce development? Advanced
📦Unlocked Packages are 2nd Generation Packages (2GP) that bundle metadata components into independently versioned, deployable units. Unlike unmanaged code, each package has a version history and dependency graph. Orgs can install specific package versions. This enables multiple teams to develop independently and release on their own schedules.
Feature | Unmanaged (Org-Based) | Unlocked Package
Versioning | ❌ No versions | ✅ Semantic versioning
Dependency tracking | ❌ Manual | ✅ Declared in sfdx-project.json
Independent release | ❌ Everything together | ✅ Package by package
Namespace | Optional | Optional
Subscriber can modify | N/A | ✅ Yes (unlike managed packages)
Say This in Interview
"Unlocked Packages let you modularise a Salesforce org — the Sales team owns their package, Service team owns theirs, and each can version and release independently without deploying the entire org. Dependencies between packages are declared explicitly, preventing circular dependencies and making the architecture explicit and auditable."
Q141What is the Command Query Responsibility Segregation (CQRS) pattern in Salesforce Apex? Advanced
🧩CQRS separates read (query) operations from write (command) operations into distinct classes or services. In Apex, Commands handle business logic and DML; Queries handle SOQL and data retrieval. This separation makes each class smaller, more testable, and lets reads and writes scale independently.
// COMMAND — handles business logic and DML only
public class ApproveOrderCommand {
    private Id orderId;
    public ApproveOrderCommand(Id orderId) {
        this.orderId = orderId;
    }
    public void execute() {
        Order__c o = new Order__c(Id = orderId, Status__c = 'Approved');
        update o;
        EventBus.publish(new Order_Approved__e(Order_Id__c = orderId));
    }
}

// QUERY — handles SOQL and data retrieval only
public class OrderQuery {
    public static List<Order__c> getPendingOrders() {
        return [SELECT Id, Name, Amount__c FROM Order__c WHERE Status__c = 'Pending'];
    }
}
Say This in Interview
"CQRS in Apex separates Command classes (business logic + DML) from Query classes (SOQL only) — each does one thing and is independently testable. Commands don't return data, Queries don't modify data. This prevents the common anti-pattern of god-class service methods that mix complex queries with multi-step DML operations."
Q142How do you implement a dynamic form in LWC that renders fields based on metadata configuration? Advanced
🔄Store form field configuration in Custom Metadata (field name, type, label, required, order) and retrieve it via Apex. In LWC, iterate over the config and render the appropriate input component for each field type using dynamic component creation or conditional rendering with lwc:if.
<!-- Dynamic form template. Note: LWC has no for:key directive; the key
     attribute goes on the element inside the iteration -->
<template for:each={fields} for:item="field">
    <template lwc:if={field.isText}>
        <lightning-input
            key={field.apiName}
            label={field.label}
            value={field.value}
            required={field.required}
            data-field={field.apiName}
            onchange={handleChange}>
        </lightning-input>
    </template>
    <template lwc:elseif={field.isPicklist}>
        <lightning-combobox
            key={field.apiName}
            label={field.label}
            options={field.options}
            value={field.value}
            data-field={field.apiName}
            onchange={handleChange}>
        </lightning-combobox>
    </template>
</template>
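The Apex side can look like the sketch below, assuming a hypothetical Custom Metadata type Form_Field__mdt with Field_API_Name__c, Type__c, Label__c, Required__c, Order__c, and Form_Name__c fields.

// Hedged sketch: serve form config from Custom Metadata to the LWC.
// Form_Field__mdt and its fields are assumed names, not standard metadata.
public with sharing class DynamicFormController {
    public class FieldConfig {
        @AuraEnabled public String apiName;
        @AuraEnabled public String label;
        @AuraEnabled public Boolean required;
        @AuraEnabled public Boolean isText;
        @AuraEnabled public Boolean isPicklist;
    }

    @AuraEnabled(cacheable=true)
    public static List<FieldConfig> getFields(String formName) {
        List<FieldConfig> configs = new List<FieldConfig>();
        for (Form_Field__mdt f : [SELECT Field_API_Name__c, Type__c, Label__c, Required__c
                                  FROM Form_Field__mdt
                                  WHERE Form_Name__c = :formName
                                  ORDER BY Order__c]) {
            FieldConfig c = new FieldConfig();
            c.apiName = f.Field_API_Name__c;
            c.label = f.Label__c;
            c.required = f.Required__c;
            c.isText = (f.Type__c == 'Text');
            c.isPicklist = (f.Type__c == 'Picklist');
            configs.add(c);
        }
        return configs;
    }
}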
Say This in Interview
"I build dynamic forms by storing field configuration in Custom Metadata — API name, type, label, required flag, and display order. The LWC retrieves this config via Apex and uses lwc:if chains to render the correct input component per field type. Admins add or reorder form fields by editing Custom Metadata — no code deployment needed."
Q143What is the Trigger Framework pattern and why is it recommended over individual trigger logic? Advanced
🏗️A Trigger Framework standardises trigger structure — one trigger per object that delegates to a handler class. The framework manages recursion prevention, context routing (before/after, insert/update/delete), and optionally enables/disables triggers via Custom Metadata. It eliminates duplicated boilerplate and makes trigger logic testable independently.
// Single trigger — delegates to handler
trigger OrderTrigger on Order__c (before insert, before update, after insert, after update) {
    TriggerDispatcher.run(new OrderTriggerHandler());
}

// Framework dispatcher manages context routing
public class TriggerDispatcher {
    public static void run(ITriggerHandler handler) {
        if (Trigger.isBefore && Trigger.isInsert) handler.beforeInsert(Trigger.new);
        if (Trigger.isAfter && Trigger.isInsert) handler.afterInsert(Trigger.new, Trigger.newMap);
        if (Trigger.isBefore && Trigger.isUpdate) handler.beforeUpdate(Trigger.new, Trigger.oldMap);
        if (Trigger.isAfter && Trigger.isUpdate) handler.afterUpdate(Trigger.new, Trigger.oldMap);
    }
}
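The dispatcher depends on a handler contract. A minimal version of the ITriggerHandler interface referenced above could look like this; the method shapes are an assumption consistent with the dispatcher calls.

// The contract every handler implements: shapes match the dispatcher above
public interface ITriggerHandler {
    void beforeInsert(List<SObject> newRecords);
    void afterInsert(List<SObject> newRecords, Map<Id, SObject> newMap);
    void beforeUpdate(List<SObject> newRecords, Map<Id, SObject> oldMap);
    void afterUpdate(List<SObject> newRecords, Map<Id, SObject> oldMap);
}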
Say This in Interview
"A trigger framework enforces one trigger per object with logic entirely in a handler class — the trigger is just an entry point. This makes handler methods independently unit testable without DML overhead, enables Custom Metadata toggle switches to disable specific handlers in production without deployment, and eliminates the chaos of multiple triggers firing in unpredictable order."
Q144How do you design a bi-directional sync between Salesforce and an external system without infinite loops? Advanced
🔄Prevent infinite loops in bi-directional sync by using a sync source flag — a field that records which system last updated the record. When the trigger fires, check if the update came from the external system and skip the outbound sync if so.
// Order__c has: Sync_Source__c (Salesforce/External), Last_Synced__c
trigger OrderSyncTrigger on Order__c (after update) {
    for (Order__c o : Trigger.new) {
        Order__c old = Trigger.oldMap.get(o.Id);
        // Only sync outbound if a Salesforce user changed the record,
        // NOT if the external system's inbound update triggered this
        if (o.Sync_Source__c != 'External' && OrderSyncHelper.fieldsChanged(o, old)) {
            BCIntegrationJob.syncOrder(o.Id);  // Async callout
        }
    }
}
// The inbound webhook from the external system sets Sync_Source__c = 'External',
// which prevents this trigger from firing the outbound sync back
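The fieldsChanged check lives in a small helper class. A sketch, assuming Amount__c and Status__c are the fields the external system consumes:

// Hedged helper sketch: compare only the sync-relevant fields
public class OrderSyncHelper {
    public static Boolean fieldsChanged(Order__c newRec, Order__c oldRec) {
        return newRec.Amount__c != oldRec.Amount__c
            || newRec.Status__c != oldRec.Status__c;
    }
}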
Say This in Interview
"Bi-directional sync loops are prevented with a Sync_Source__c flag — when the inbound integration updates a Salesforce record, it sets the flag to 'External'. The trigger sees this and skips the outbound sync. When a Salesforce user makes a change, the flag is 'Salesforce' and the outbound sync fires normally."
Q145What is Salesforce's multitenancy architecture and how does it affect governor limits? Advanced
🏢Salesforce runs thousands of customer orgs on shared infrastructure — a multitenant model. Every org shares the same servers, database, and application layer. Governor limits exist specifically to prevent any single org from monopolising shared resources and degrading other customers' performance. They're not technical limitations — they're fairness constraints.
💡 Why Governor Limits Exist
  • CPU time (10s sync): Prevents one org's complex code from hogging shared server CPU
  • SOQL limit (100 queries): Prevents one org from flooding the shared database with queries
  • Heap (6MB sync): Prevents one org from consuming all available server memory
  • DML (150 statements): Prevents one org from locking database rows excessively
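The Limits class exposes both consumed and maximum values at runtime, which is useful for logging how close bulk code gets to these ceilings:

// Inspect governor limit consumption at runtime with the Limits class
System.debug('SOQL queries: ' + Limits.getQueries() + ' of ' + Limits.getLimitQueries());
System.debug('DML statements: ' + Limits.getDmlStatements() + ' of ' + Limits.getLimitDmlStatements());
System.debug('CPU time (ms): ' + Limits.getCpuTime() + ' of ' + Limits.getLimitCpuTime());
System.debug('Heap (bytes): ' + Limits.getHeapSize() + ' of ' + Limits.getLimitHeapSize());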
Say This in Interview
"Governor limits are the price of multitenancy — Salesforce runs thousands of orgs on shared infrastructure, so limits protect every customer from resource monopolisation by any single org. Understanding this framing helps me explain to developers why you can't just 'turn off' governor limits — they're architectural fairness guarantees, not arbitrary restrictions."
Q146What are Salesforce Big Objects and when do you use them instead of standard custom objects? Advanced
🗄️Big Objects store massive volumes of historical data (billions of records) that don't need real-time transactional access. They use a different storage architecture optimised for scale: records are written with Database.insertImmediate() rather than standard DML, SOQL queries must filter on the big object's index fields (equality filters, with a range filter allowed only on the last index field used), and they don't support triggers or standard reporting.
Feature | Custom Object | Big Object
Record capacity | Millions (storage limited) | Billions
SOQL support | Full SOQL | Index-field filters only
Triggers | ✅ Yes | ❌ No
Standard reports | ✅ Yes | ❌ No
Delete support | ✅ Yes | ⚠️ Only via Apex Database.deleteImmediate()
Best for | Operational data | Audit logs, historical archives
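A minimal write-and-read sketch, assuming a hypothetical big object Order_History__b whose index covers Account__c then Order_Date__c, with accountId already in scope:

// Hedged sketch: Order_History__b and its fields are assumed names
Order_History__b row = new Order_History__b(
    Account__c = accountId,
    Order_Date__c = Datetime.now(),
    Amount__c = 50000
);
Database.insertImmediate(row);  // big objects use insertImmediate, not insert

// Queries must filter on index fields, in index order
List<Order_History__b> history = [
    SELECT Account__c, Order_Date__c, Amount__c
    FROM Order_History__b
    WHERE Account__c = :accountId
];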
Say This in Interview
"Big Objects are for long-term archival of data at a scale custom objects can't support — audit trails, event logs, historical transaction records going back years. They're append-only, don't support triggers or standard reports, and query only via indexed equality filters. I use them for compliance-driven historical data that needs to be stored but rarely queried in complex ways."
Q147How do you implement a test data factory pattern in Apex? Medium
🏭A Test Data Factory is a utility class that creates test records with sensible defaults. Test methods call factory methods with only the fields they care about — all other required fields are defaulted by the factory. This eliminates duplicated test record creation code and makes tests resilient to new required field additions.
@isTest
public class TestDataFactory {
    public static Account createAccount(Map<String, Object> overrides) {
        Account acc = new Account(
            Name = 'Test Account',
            Industry = 'Technology',
            Type = 'Customer',
            BillingCountry = 'India'
        );
        // Apply overrides — each test sets only the fields it cares about
        for (String field : overrides.keySet()) {
            acc.put(field, overrides.get(field));
        }
        return acc;
    }

    public static List<Order__c> createOrders(Id accId, Integer count) {
        List<Order__c> orders = new List<Order__c>();
        for (Integer i = 0; i < count; i++) {
            orders.add(new Order__c(Account__c = accId, Amount__c = 1000 * i, Status__c = 'Draft'));
        }
        return orders;
    }
}

// A test only specifies what it needs
Account acc = TestDataFactory.createAccount(new Map<String, Object>{'Name' => 'XYZ Corp'});
Say This in Interview
"Test Data Factory centralises test record creation with sensible defaults — tests only specify fields relevant to what they're testing. When a new required field is added to the object, I update the factory once and all 50+ tests that use it continue working without modification."
Q148What is destructiveChanges.xml and when do you need it in a deployment? Medium
🗑️destructiveChanges.xml lists metadata components to DELETE from the target org during deployment. It's required because the normal package.xml only specifies what to deploy/update — it cannot delete existing components. Use it to remove unused Apex classes, fields, objects, or flows from Production.
<?xml version="1.0" encoding="UTF-8"?>
<!-- destructiveChanges.xml: delete these components from the target org.
     The XML declaration must be the first thing in the file -->
<Package xmlns="http://soap.sforce.com/2006/04/metadata">
    <types>
        <members>OldOrderService</members>
        <members>LegacyBatchJob</members>
        <name>ApexClass</name>
    </types>
    <types>
        <members>Account.Old_Field__c</members>
        <name>CustomField</name>
    </types>
    <version>60.0</version>
</Package>
⚠️ Important Notes
  • ⚠️Pre vs Post destructive: destructiveChangesPre.xml runs before deployment; destructiveChangesPost.xml runs after — use Post for components referenced by what you're deploying
  • ⚠️Cannot delete via Change Sets: Change Sets can't delete components — destructiveChanges.xml requires SFDX or Metadata API
Say This in Interview
"destructiveChanges.xml is the only way to delete metadata components from Production via deployment — regular package.xml can't delete, only deploy. I use destructiveChangesPost.xml to remove legacy components after deploying their replacements, ensuring the new code deploys successfully before the old is deleted."
Q149How do you troubleshoot Apex test failures that only occur in Production deployment validation? Advanced
🔍Tests failing in Production but passing in Sandbox usually indicate: hardcoded IDs from Sandbox, dependency on existing data (SeeAllData), tests using org-specific configuration that differs in Production, or Production having different settings/profiles. Diagnose by reviewing the specific failure message and matching it to these patterns.
✅ Common Causes & Fixes
  • 1️⃣Hardcoded IDs: RecordType, Profile, or Queue IDs hardcoded in tests — always query by Name/DeveloperName instead (see the snippet after this list)
  • 2️⃣SeeAllData dependency: Test relies on Production data structure — remove SeeAllData and create all required data in @TestSetup
  • 3️⃣Different validation rules: Production has a validation rule Sandbox doesn't — test data must satisfy Production's validation rules
  • 4️⃣Custom Metadata differences: Test logic branches on CMT values that differ between orgs — use Test.isRunningTest() or inject CMT in test context
  • 5️⃣User Permission differences: Test users in Sandbox have permissions Production users don't — use System.runAs() with explicitly defined permissions
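For example, a record type Id should be resolved at runtime rather than hardcoded; Enterprise_Order below is a hypothetical record type developer name on a hypothetical Order__c object.

// ❌ Breaks across orgs: Ids differ between Sandbox and Production
// Id rtId = '012000000000AbC';

// ✅ Resolve by developer name at runtime, which works in any org
Id rtId = Schema.SObjectType.Order__c
    .getRecordTypeInfosByDeveloperName()
    .get('Enterprise_Order')
    .getRecordTypeId();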
Say This in Interview
"Production-only test failures are almost always environment assumptions — hardcoded IDs, SeeAllData dependencies, or Production having more restrictive validation rules than Sandbox. I fix them by never hardcoding org-specific IDs, never using SeeAllData, creating all test data explicitly, and running tests in a Full Sandbox that mirrors Production configuration before deploying."
Q150How do you approach a Salesforce org audit — what do you check and in what order? Advanced
🔍A Salesforce org audit systematically reviews Security, Automation Health, Data Quality, Code Quality, Integration Reliability, and Performance. Start with Security (highest risk) and work outward. Document findings with severity ratings and a remediation roadmap.
📋 Audit Checklist — Priority Order
  • 1️⃣Security: OWD settings, Profile vs Permission Set model, MFA enforcement, Connected App permissions, Named Credential security
  • 2️⃣Automation health: Active flow count, flow errors in the last 30 days, Workflow Rules still active (should be migrated), Process Builder usage, trigger recursion risks
  • 3️⃣Code quality: Apex test coverage per class (not just org-wide), classes with 0% coverage, classes using SeeAllData=true, deprecated APIs in use
  • 4️⃣Data quality: Duplicate record count, missing required field % per object, records without an owner, stale records not updated in 2+ years
  • 5️⃣Integration health: Failed integration records in the last 30 days, callout error rate, Named Credential expiry dates
  • 6️⃣Performance: Slow SOQL queries in Event Monitoring, Apex CPU usage trends, batch job failure rates
🏭 Real World — XYZ Company

I conducted a full org audit covering 6 areas — found 3 critical security gaps (profiles with Modify All Data unnecessarily), 12 Workflow Rules still active that should have been migrated to Flow, 4 Apex classes with 0% test coverage, and our BC integration had a 15% failure rate with no alerting. The audit report became our 6-month technical debt roadmap.

Say This in Interview
"I audit in security-first order — vulnerabilities have immediate business impact. Then automation health, because broken flows affect users daily. Code quality, data quality, integration health, and performance follow. The output is a severity-rated finding list with a prioritised remediation roadmap, not just a list of problems."