๐Ÿ  Home ๐Ÿ”’ Record Sharing ⚙ Apex Triggers ๐Ÿ” SOQL ๐Ÿ’ป LWC ๐Ÿ”— Integration ๐Ÿค– Flows & Automation ๐Ÿค– Agentforce & AI ☁ Data Cloud ๐ŸŽ“ DC Course — Free ๐Ÿ’ต CPQ ๐ŸŽฏ 100 Scenario Questions ๐Ÿ† 150 Advanced Questions ๐Ÿ“ง Marketing Cloud ๐Ÿ‘ฅ About Us Start Learning Free →

🎯 2026 Edition — Most Searched

100 Salesforce Scenario-Based Interview Questions

Real scenarios. Real answers. Covering every topic interviewers ask in 2026 — from Agentforce to DevOps, Apex to Data Cloud.

100 Questions · 10 Topics · Updated for 2026 · Free — No Paywall
🤖 Agentforce & AI
Hottest topic in 2026 — asked in 80% of senior Salesforce interviews
Q1–Q12
Q1. When would you use Agentforce instead of custom Apex automation? (Advanced)
🤖 Use Agentforce for conversational, multi-step, context-dependent interactions that require natural language understanding. Use Apex for deterministic, high-volume, auditable business logic with strict error-handling requirements.
📊 Decision Framework
Use Agentforce When | Use Apex When
User asks questions in natural language | Processing 10,000+ records in bulk
Multi-step conversation needed | Complex calculations with strict logic
Context must carry across turns | Integration callouts with retries
AI needs to decide next action | Audit trail and deterministic output needed
🏭 Real World — XYZ Company

For XYZ Company's order management, I use Apex for freight charge calculation (deterministic, must be exact every time). But if we were to let sales reps ask "What's the best pricing for this customer?", Agentforce would be the right tool — it can pull order history, suggest a price, and explain its reasoning in natural language.

Say This in Interview
"Agentforce handles conversational, context-aware workflows where the path isn't predetermined — Apex handles deterministic, high-volume logic where every step, error, and outcome must be exactly controlled."
Q2. A customer wants AI to auto-respond to service cases — how do you implement this with Agentforce? (Advanced)
💬 Build an Agentforce Service Agent using Prompt Builder with grounding on your Knowledge Base, configure Case topic classification, and set up human handoff rules for escalation beyond the agent's confidence threshold.
✅ Implementation Steps
1. Enable Agentforce in Setup → Einstein → Agents → Create Service Agent
2. Connect Knowledge Articles as a grounding data source
3. Configure Topics — what questions the agent handles (e.g. Order Status, Refunds, Technical Issues)
4. Define Actions — what the agent can DO (update case status, send email, create order)
5. Set a confidence threshold — below 70% → escalate to a human agent
6. Test with Einstein Copilot preview, deploy to Experience Cloud or embedded chat
Say This in Interview
"I'd build an Agentforce Service Agent grounded on our Knowledge Base, configure Topics for what it handles, define Actions for what it can do, and set a confidence threshold below which it escalates to a human — keeping AI in the loop without replacing human judgment."
Q3. How do you prevent Agentforce from exposing sensitive customer data? (Advanced)
🔒 Agentforce respects Salesforce sharing rules and profiles by default — but you must explicitly configure data masking, limit grounding sources, and audit AI responses regularly to prevent data leakage.
✅ Security Measures
  • Sharing rules enforced: Agentforce only accesses records the running user has permission to see — not a superuser
  • Field-level security: FLS is respected — masked fields in profiles won't be shown to agents
  • Data masking: Use Shield Platform Encryption for SSN, payment details before grounding
  • Limit grounding sources: Only ground on approved Knowledge Articles, not all Salesforce data
  • Audit logs: Enable Einstein Activity Capture audit to monitor what data was accessed
  • Never: Ground Agentforce on unmasked financial, health, or PII data without encryption
Say This in Interview
"Agentforce inherits Salesforce's sharing model — but I'd also configure data masking via Shield, limit grounding to approved sources only, and set up audit logging to monitor every AI interaction for compliance."
Q4. Your Agentforce agent is giving wrong or hallucinated responses — how do you troubleshoot? (Advanced)
🚨 Hallucinations in Agentforce usually mean either poor grounding data, weak prompt instructions, or the agent is operating outside its defined Topics. Fix the root cause — not the symptom.
🔍 Troubleshooting Checklist
  • 1️⃣ Check grounding data quality — Is the Knowledge Base outdated? Are articles contradicting each other?
  • 2️⃣ Review the Topic definition — Is the agent being asked questions outside its configured scope?
  • 3️⃣ Inspect Prompt Builder instructions — Vague prompts = vague answers. Be more explicit about tone, format, and what NOT to say
  • 4️⃣ Add negative examples — Tell the agent "If you don't know, say you don't know — never guess"
  • 5️⃣ Lower the confidence threshold — Escalate more cases to humans until grounding improves
Say This in Interview
"Hallucinations usually mean the grounding data is poor or the Topic scope is too broad — I'd audit the Knowledge Base quality, tighten the Prompt Builder instructions with negative examples, and lower the confidence threshold until the grounding data improves."
Q5. How would you implement Agentforce + Data Cloud for a 360-degree customer view? (Advanced)
☁️ Data Cloud unifies all customer data into a Single Unified Profile — Agentforce is then grounded on that profile, giving it real-time, complete context about every customer before responding.
🔄 Architecture Flow
Layer | What It Does | Tool
Data Ingestion | Pull data from CRM, ERP, web, app | Data Cloud Connectors
Unification | Match records into Single Profile | Identity Resolution
Enrichment | Add segments, insights, scores | Data Cloud Segments
Grounding | Feed unified profile to Agentforce | Einstein Data Library
Response | Agent answers with full customer context | Agentforce Agent
Say This in Interview
"Data Cloud creates the unified customer profile from all data sources — Agentforce is then grounded on that profile, so when a customer contacts support, the agent already knows their full history, behaviour, and value before saying hello."
Q6. How do you test an Agentforce agent before deploying to Production? (Medium)
🧪 Use Einstein Copilot's built-in preview mode to test conversational scenarios, create test Topic conversations covering the happy path and edge cases, and validate with real users in UAT before go-live.
✅ Testing Steps
1. Use the Agentforce preview panel in Setup to simulate real conversations
2. Test all defined Topics — both expected questions and out-of-scope questions
3. Test edge cases: empty data, conflicting Knowledge Articles, ambiguous questions
4. Validate human handoff — confirm escalation triggers work correctly
5. UAT with real business users — track satisfaction score before go-live
6. Monitor Einstein Activity audit logs for the first 2 weeks post-launch
Say This in Interview
"I test Agentforce using the built-in preview mode covering all topics including edge cases, validate human handoff triggers, run UAT with actual users, and monitor audit logs for the first two weeks in Production."
Q7. Client wants AI to predict the Next Best Action for sales reps — how do you build this? (Advanced)
🎯 Use Einstein Next Best Action with Recommendation Strategies built in Flow, grounded on Data Cloud segments and Opportunity history, surfaced to reps via an LWC component on the Opportunity record page.
✅ Implementation Approach
  • Einstein NBA: Setup → Einstein → Next Best Action — create Recommendations (the action cards)
  • Strategy Builder: Create Flow-based strategies that filter and rank recommendations based on Opportunity stage, amount, industry
  • Data Cloud segments: Feed customer behaviour data as additional signals
  • Surface in LWC: Use force:recommendationChanged or Einstein NBA component on record page
Say This in Interview
"I'd use Einstein Next Best Action with Strategy Builder in Flow to rank recommendations based on Opportunity data and Data Cloud customer segments — displayed to reps via an LWC component on the Opportunity page."
Q8. How do you ensure Agentforce respects Salesforce sharing rules? (Medium)
Agentforce automatically runs in the context of the logged-in user — it only accesses records that user has permission to see. Sharing rules, OWD, profiles, and FLS are all enforced automatically.
✅ What's Enforced Automatically
  • OWD and sharing rules — agent only sees records the user can see
  • Field-Level Security — hidden fields stay hidden even from AI responses
  • Profile object permissions — agent cannot access objects the user's profile restricts
  • ⚠️ Grounding data — you must control what external data sources are connected; Salesforce security doesn't apply to external APIs
Say This in Interview
"Agentforce runs as the logged-in user context — OWD, sharing rules, FLS, and profiles are all enforced automatically. The only area to watch is external grounding sources, where Salesforce security doesn't automatically apply."
Q9. What is Prompt Builder and when would you use it? (Medium)
📝 Prompt Builder is Salesforce's no-code tool for creating, testing, and managing AI prompts. Use it to build Sales Email prompts, Case Summary prompts, or any LLM interaction that needs to be grounded on Salesforce data without writing Apex.
📊 Prompt Template Types
Template Type | Use Case | Output
Sales Email | Generate personalised outreach emails | Draft email for rep to review
Field Generation | Auto-populate a field with AI | Filled field value
Flex | Any custom AI interaction | Free-form AI response
Record Summary | Summarise a record for context | Text summary
Say This in Interview
"Prompt Builder is Salesforce's no-code prompt management tool — I'd use it to create grounded AI prompts for sales emails, case summaries, and field auto-population without writing Apex, while keeping prompt logic version-controlled and testable."
Q10. How do you handle a situation where Agentforce is operating outside its approved scope? (Advanced)
⚠️ If Agentforce is answering questions outside its configured Topics, tighten the Topic instructions, add explicit "out of scope" responses, and lower the confidence threshold for human handoff.
✅ Fixes
  • 1️⃣ Topic refinement: Make Topic descriptions very specific — "Handle only order status questions, not pricing or legal queries"
  • 2️⃣ Add a fallback Topic: Create a catch-all Topic that says "For anything outside my scope, I'll connect you to a human"
  • 3️⃣ Prompt guardrails: In Prompt Builder, explicitly say "Do NOT answer questions about X, Y, Z"
  • 4️⃣ Monitor conversations: Use Einstein Activity audit to catch out-of-scope interactions
Say This in Interview
"I'd tighten Topic descriptions to be very specific, add a fallback 'out of scope' Topic that escalates to humans, and add explicit guardrails in Prompt Builder — then monitor audit logs to catch any remaining boundary violations."
Q11. Client wants real-time AI recommendations during a call — what do you build? (Advanced)
📞 Build an Agentforce agent embedded in Service Cloud Voice with real-time transcript analysis — the agent reads the live call transcript and surfaces Next Best Action recommendations to the agent's screen in real time.
✅ Architecture
  • Service Cloud Voice: Captures live call transcript via Amazon Connect or telephony partner
  • Einstein Conversation Insights: Analyses transcript in real time, detects intent and sentiment
  • Einstein NBA: Pushes relevant recommendations to agent's console based on transcript context
  • LWC component: Displays recommendations as actionable cards on the console
Say This in Interview
"I'd use Service Cloud Voice with Einstein Conversation Insights to analyse the live transcript, and Einstein Next Best Action to push real-time recommendations to the agent console — displayed via an LWC component alongside the call."
Q12. How do you measure the success/ROI of an Agentforce implementation? (Medium)
📊 Measure Agentforce ROI through containment rate (% of cases resolved without a human), average handle time reduction, CSAT scores, and cost-per-interaction compared to a human-only baseline.
📊 Key Metrics
Metric | What It Measures | Target
Containment Rate | % cases AI resolved without human | > 60%
Avg Handle Time | Time to resolve a case | 30%+ reduction
CSAT Score | Customer satisfaction with AI responses | > 80%
Cost per Interaction | AI vs human agent cost | 60%+ savings
Escalation Rate | % handed off to humans | Decreasing over time
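To make the cost metric concrete, here is a hypothetical worked example (all figures invented for illustration): at 10,000 cases per month and $5 per human-handled interaction, the human-only baseline costs $50,000/month. With 60% containment and an AI cost of $1 per interaction, the monthly cost becomes 6,000 × $1 + 4,000 × $5 = $26,000 — a 48% reduction, on top of the handle-time and CSAT effects tracked above.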
Say This in Interview
"I measure Agentforce ROI through containment rate, average handle time reduction, CSAT, and cost-per-interaction — a 60%+ containment rate with maintained CSAT typically indicates a successful implementation."
Apex & Governor Limits
Asked in every Salesforce developer interview — know every scenario cold
Q13–Q27
Q13. Your batch job is failing in Production — what are your exact steps? (Medium)
🚨 Never guess in Production. Diagnose first — read the error, reproduce in sandbox, fix, test, then deploy. Never make blind fixes directly in Production.
🔍 Step-by-Step
1. Setup → Apex Jobs → find the failed job → read the error message and exception type (the same data is queryable via AsyncApexJob — see the sketch below)
2. Check debug logs: Setup → Debug Logs → run the same batch in sandbox with matching data
3. Identify the root cause: Governor limit hit? Null pointer? Data issue? SOQL error?
4. Fix in sandbox → write/update the test class → confirm all tests pass with 100% batch coverage
5. Deploy the fix via Change Set → validate → Quick Deploy
6. Re-run the batch for failed records — use Database.QueryLocator scope to target failed records only
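A minimal sketch of the step-1 lookup done in Apex — AsyncApexJob is the standard object behind Setup → Apex Jobs:

// Pull the most recent failed batch runs with their error summaries
List<AsyncApexJob> failedJobs = [
    SELECT ApexClass.Name, Status, ExtendedStatus, NumberOfErrors,
           JobItemsProcessed, TotalJobItems, CompletedDate
    FROM AsyncApexJob
    WHERE JobType = 'BatchApex' AND Status = 'Failed'
    ORDER BY CompletedDate DESC
    LIMIT 10
];
for (AsyncApexJob job : failedJobs) {
    System.debug(job.ApexClass.Name + ' → ' + job.ExtendedStatus);
}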
๐Ÿญ XYZ Company

Our nightly exchange rate Queueable Apex failed once in Production due to a null ExchangeRate__c field when IndiaMART returned an unexpected response. I read the error in Apex Jobs, reproduced it with test data in sandbox, added null check handling, updated the test class, and deployed the fix within 2 hours.

Say This in Interview
"I read the error from Setup → Apex Jobs, reproduce in sandbox, fix the root cause, update test classes, deploy via Change Set, and re-run the batch targeting only the failed records — never make blind fixes in Production."
Q14. A trigger is causing infinite recursion — how do you fix it? (Easy)
🔄 Use a static boolean variable in a helper class to track whether the trigger has already run in the current transaction. Set it to true on the first run, check it before executing on subsequent calls.
📝 The Fix — Static Variable Pattern
public class TriggerHelper {
    public static Boolean hasRun = false;
}

// In your trigger:
trigger OrderTrigger on Order (before update) {
    if (!TriggerHelper.hasRun) {
        TriggerHelper.hasRun = true;
        // your logic here
    }
}
⚠️ Why This Works
  • Static variables persist for the entire transaction — not reset between trigger calls
  • When trigger fires again (due to update in trigger body), hasRun is already true → skips execution
  • Don't use instance variables — they reset on each trigger call
Say This in Interview
"I use a static boolean in a handler class — set it to true on first execution, check it before running on subsequent calls. Since static variables persist for the full transaction, this prevents recursion without affecting other transactions."
Q15. How do you bulkify a trigger that's hitting the 101 SOQL limit? (Easy)
🚫 SOQL inside a for loop is the most common Salesforce anti-pattern. Always collect IDs first, query once outside the loop, then process results using a Map for O(1) lookup.
📝 Wrong vs Right
// ❌ WRONG — SOQL in loop
for (Order o : Trigger.new) {
    Account a = [SELECT Id, Name FROM Account WHERE Id = :o.AccountId];
}

// ✅ CORRECT — Bulkified
Set<Id> accIds = new Set<Id>();
for (Order o : Trigger.new) {
    accIds.add(o.AccountId);
}
Map<Id, Account> accMap = new Map<Id, Account>(
    [SELECT Id, Name FROM Account WHERE Id IN :accIds]
);
for (Order o : Trigger.new) {
    Account a = accMap.get(o.AccountId); // O(1) lookup
}
Say This in Interview
"I collect all IDs in a Set first, query once outside the loop storing results in a Map, then use Map.get() for O(1) lookup inside the loop — one SOQL query handles any number of records."
Q16. When do you use Queueable vs Future vs Batch Apex? (Medium)
⚙️ Each async pattern solves a different problem — choose based on data volume, chaining needs, callout requirements, and monitoring needs.
📊 Decision Table
Pattern | Use When | Limitations
@Future | Simple callouts, small one-off async tasks | No monitoring, no chaining, no object params
Queueable | Complex objects, chaining, monitoring needed | Max 50 chained, no large data volumes
Batch Apex | Processing millions of records | Max 5 concurrent, slower than Queueable
Scheduled Apex | Run at specific time/interval | Max 100 scheduled jobs in org
🏭 XYZ Company

For the INR exchange rate conversion, I used Queueable Apex — it needed to make a callout to get the rate, process the Order record, and could be monitored in Apex Jobs. If I needed to process 100,000 orders nightly, I'd use Batch Apex instead. For a one-off email notification, @Future would be sufficient.

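A minimal sketch of a Queueable along the lines of the exchange-rate job described above — the endpoint, Named Credential, and response shape are illustrative assumptions, not the actual implementation:

public class ExchangeRateJob implements Queueable, Database.AllowsCallouts {
    public void execute(QueueableContext ctx) {
        // Callout to a (hypothetical) rate service via a Named Credential
        HttpRequest req = new HttpRequest();
        req.setEndpoint('callout:Rate_Service/latest?base=USD&target=INR');
        req.setMethod('GET');
        HttpResponse res = new Http().send(req);

        Map<String, Object> payload = (Map<String, Object>) JSON.deserializeUntyped(res.getBody());
        Decimal rate = Decimal.valueOf(String.valueOf(payload.get('rate')));

        // Stamp the rate on unprocessed orders (field names as in the example above)
        List<Order__c> orders = [SELECT Id FROM Order__c WHERE ExchangeRate__c = null LIMIT 200];
        for (Order__c o : orders) {
            o.ExchangeRate__c = rate;
        }
        update orders;
    }
}
// Enqueue it: System.enqueueJob(new ExchangeRateJob());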
Say This in Interview
"Future for simple fire-and-forget callouts, Queueable when I need chaining, object parameters, or job monitoring, and Batch Apex for high-volume record processing that would exceed transaction limits."
Q17. Your Apex class is hitting the heap size limit — how do you resolve it? (Advanced)
💾 The heap size limit (6MB sync, 12MB async) is hit when you're storing too much data in memory. The fix is to process records in smaller batches and avoid holding large collections in memory simultaneously.
✅ Solutions
  • 1️⃣ Switch to Batch Apex: Process records in chunks of 200 — each batch chunk gets a fresh heap
  • 2️⃣ Avoid large collections: Don't store a full sObject list if you only need specific fields — query only what you need
  • 3️⃣ Nullify references: Set large collections to null after use — GC will reclaim the memory
  • 4️⃣ Use Database.getQueryLocator: In Batch, this streams records rather than holding all in memory
  • 5️⃣ Reduce SOQL fields: SELECT only required fields — avoid SELECT * equivalent patterns
Say This in Interview
"Heap limit means too much data in memory at once — I'd switch to Batch Apex to process in 200-record chunks, query only needed fields, nullify large collections after use, and use Database.getQueryLocator for streaming rather than holding all records in memory."
Q18. How do you implement retry logic in Apex for failed API callouts? (Advanced)
🔁 Implement retry logic using Queueable Apex with a retry counter passed as a parameter. On failure, chain a new Queueable job with an incremented counter until max retries is reached.
public class RetryCalloutJob implements Queueable, Database.AllowsCallouts {
    private Integer retryCount;
    private static final Integer MAX_RETRIES = 3;

    public RetryCalloutJob(Integer retryCount) {
        this.retryCount = retryCount;
    }

    public void execute(QueueableContext ctx) {
        try {
            // Make API callout
            HttpResponse res = makeCallout();
            if (res.getStatusCode() != 200 && retryCount < MAX_RETRIES) {
                System.enqueueJob(new RetryCalloutJob(retryCount + 1));
            }
        } catch (Exception e) {
            if (retryCount < MAX_RETRIES) {
                System.enqueueJob(new RetryCalloutJob(retryCount + 1));
            }
        }
    }
}
Say This in Interview
"I implement retry via Queueable chaining — pass a retry counter as a constructor parameter, increment it on failure, and enqueue a new job until max retries is reached. I also store failed records to a custom object for manual review after exhausting retries."
Q19. Your test class is passing in sandbox but failing in Production — why and how to fix? (Advanced)
🧪 This almost always means the test is relying on org data using @isTest(seeAllData=true) or querying records that exist in sandbox but not in Production. Fix by creating all test data inside the test class.
🔍 Common Root Causes
  • seeAllData=true: Test queries org data that exists in sandbox but is different/missing in Production
  • Hardcoded IDs: Record IDs are different between sandbox and Production — always query for IDs, never hardcode
  • Org-specific data dependencies: Test expects specific Price Books, Products, or RecordTypes that exist in sandbox only
  • Fix: Remove seeAllData=true, create all data in @TestSetup, use @isTest annotation only, never hardcode org-specific IDs
Say This in Interview
"Tests failing in Production but passing in sandbox almost always means reliance on org data via seeAllData=true or hardcoded IDs — the fix is creating all test data inside the test class using @TestSetup so tests are fully environment-independent."
Q20. Client needs to process 2 million records overnight — what Apex pattern do you choose? (Medium)
📦 Use Batch Apex with Database.Batchable — process in chunks of 200 records per execute() call. For 2 million records at 200 per batch, that's 10,000 batch executions, which Salesforce handles automatically overnight.
✅ Batch Apex Structure
global class ProcessOrdersBatch implements Database.Batchable<sObject> {
    global Database.QueryLocator start(Database.BatchableContext bc) {
        return Database.getQueryLocator(
            'SELECT Id, Status__c FROM Order__c WHERE Processed__c = false'
        );
    }

    global void execute(Database.BatchableContext bc, List<Order__c> scope) {
        // Process each batch of 200
        for (Order__c o : scope) {
            o.Processed__c = true;
        }
        update scope;
    }

    global void finish(Database.BatchableContext bc) {
        // Send completion email or chain next batch
    }
}
// Execute: Database.executeBatch(new ProcessOrdersBatch(), 200);
Say This in Interview
"For 2 million records I'd use Batch Apex with Database.getQueryLocator which streams records without holding all in memory, process in chunks of 200, and schedule it via Scheduled Apex to run overnight — 10,000 batches completing automatically with full governor limit reset between each."
Q21. How do you prevent duplicate records via Apex when standard duplicate rules aren't enough? (Medium)
🔍 Use a before-insert/before-update trigger that queries for existing records matching your duplicate criteria and adds an error to the record if a match is found — before the record is saved.
trigger PreventDuplicateOrder on Order__c (before insert) {
    Set<String> orderNums = new Set<String>();
    for (Order__c o : Trigger.new) orderNums.add(o.Order_Number__c);

    Map<String, Order__c> existing = new Map<String, Order__c>();
    for (Order__c o : [SELECT Id, Order_Number__c FROM Order__c
                       WHERE Order_Number__c IN :orderNums]) {
        existing.put(o.Order_Number__c, o);
    }

    for (Order__c o : Trigger.new) {
        if (existing.containsKey(o.Order_Number__c)) {
            o.addError('Order Number already exists: ' + o.Order_Number__c);
        }
    }
}
Say This in Interview
"I use a before-insert trigger that queries existing records by the unique field, builds a Map, then checks each new record against it — calling addError() to block the save before it reaches the database, which is more reliable than after-insert rollback."
Q22. How do you unit test an Apex class that makes HTTP callouts? (Medium)
🧪 Implement the HttpCalloutMock interface and use Test.setMock() to intercept real HTTP calls during testing — Salesforce doesn't allow real callouts in test context, so mock responses are mandatory.
// 1. Create the mock class
@isTest
global class IndiaMART_MockResponse implements HttpCalloutMock {
    global HTTPResponse respond(HTTPRequest req) {
        HttpResponse res = new HttpResponse();
        res.setStatusCode(200);
        res.setBody('{"leads":[{"name":"Test Lead"}]}');
        return res;
    }
}

// 2. Use it in the test class
@isTest
static void testCallout() {
    Test.setMock(HttpCalloutMock.class, new IndiaMART_MockResponse());
    Test.startTest();
    IndiaMART_Handler.fetchLeads();
    Test.stopTest();
    // Assert results
}
Say This in Interview
"I create a class implementing HttpCalloutMock that returns a controlled response, then use Test.setMock() to register it before calling the class under test — this intercepts the HTTP call and returns the mock response without making a real network request."
Q23. How do you share code between two triggers on different objects? (Medium)
🔧 Create a shared Apex utility/helper class with static methods containing the reusable logic — both triggers call the same utility class methods rather than duplicating code.
✅ Best Practice — Trigger Handler Pattern
  • Keep triggers thin — one line that calls a handler class
  • Handler class contains all business logic — one method per event (beforeInsert, afterUpdate etc.)
  • Utility class contains shared methods — validation, formatting, lookup logic
  • Both OrderTrigger and QuoteTrigger can call the same AddressValidator.validate() method — see the sketch below
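A minimal sketch of that layout — the object and field names are illustrative:

public class AddressValidator {
    // Shared validation any trigger can call — object-agnostic via a field token
    public static void validate(List<SObject> records, Schema.SObjectField addressField) {
        for (SObject rec : records) {
            if (String.isBlank((String) rec.get(addressField))) {
                rec.addError('Shipping address is required.');
            }
        }
    }
}

// Thin triggers delegating to the shared utility
trigger OrderTrigger on Order__c (before insert) {
    AddressValidator.validate(Trigger.new, Order__c.Shipping_Address__c);
}
trigger QuoteTrigger on Quote__c (before insert) {
    AddressValidator.validate(Trigger.new, Quote__c.Shipping_Address__c);
}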
Say This in Interview
"I follow the Trigger Handler Pattern — thin triggers that delegate to handler classes, with shared logic in utility classes containing static methods that any trigger or class can call — keeping code DRY and testable."
Q24. Client wants an alert when an Opportunity crosses ₹1 crore — Apex or Flow? (Easy)
🤔 Use Flow first — this is exactly the use case Salesforce Flow was built for. Only use Apex if the notification needs complex logic like multi-currency conversion, dynamic recipients from related records, or retry on failure.
Factor | Use Flow | Use Apex Trigger
Simple field threshold alert | ✅ Yes | Overkill
Complex currency conversion | Limited | ✅ Yes
Dynamic email recipients from related objects | Possible but complex | ✅ Easier
Bulk data import scenario | ⚠️ Can fail | ✅ Bulkified
Say This in Interview
"I always start with Flow for threshold-based notifications — it's faster to build, easier to maintain, and doesn't require deployment. I only move to Apex when the logic exceeds what Flow can handle cleanly, like multi-currency conversion or complex recipient logic."
Q25. Your Apex trigger is slowing down page load — how do you optimise it? (Advanced)
Page slowness from triggers means synchronous heavy processing in a before/after trigger. Move heavy logic to async (Queueable/Future), reduce SOQL queries, and ensure no SOQL/DML in loops.
✅ Optimisation Checklist
  • 1️⃣ Move to async: Any logic not needed immediately → Queueable Apex
  • 2️⃣ Bulkify queries: Single SOQL with a Map, not SOQL per record
  • 3️⃣ Reduce field queries: SELECT only fields you actually use
  • 4️⃣ Early exit: Check whether relevant fields changed before running logic — see the sketch below
  • 5️⃣ Avoid DML in loops: Collect records in a list, do one DML at the end
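A minimal sketch of the early-exit check from point 4, assuming the heavy logic only matters when Status__c changes (names illustrative):

trigger OrderTrigger on Order__c (before update) {
    List<Order__c> changed = new List<Order__c>();
    for (Order__c o : Trigger.new) {
        // Compare against the old version — skip records whose relevant field didn't change
        if (o.Status__c != Trigger.oldMap.get(o.Id).Status__c) {
            changed.add(o);
        }
    }
    if (changed.isEmpty()) {
        return; // early exit — no SOQL, no DML, minimal page-load cost
    }
    // ...heavy logic runs only for `changed`...
}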
Say This in Interview
"I'd profile the trigger using debug logs to find the bottleneck, move any non-critical synchronous logic to Queueable, bulkify all SOQL queries into Maps, add early-exit conditions when relevant fields haven't changed, and batch all DML operations to the end."
Q26. How do you handle a scenario where two triggers on the same object conflict? (Advanced)
⚔️ Salesforce doesn't guarantee trigger execution order when multiple triggers exist on the same object. The correct fix is to merge all logic into a single trigger using the Trigger Handler pattern.
✅ Best Practice
  • One trigger per object rule: One trigger, one handler class, ordered methods for each event
  • TriggerFramework: Use a framework like Kevin O'Hara's to manage execution order explicitly
  • Never rely on execution order across multiple triggers — Salesforce does not guarantee it
Say This in Interview
"Salesforce doesn't guarantee order between multiple triggers on the same object — the fix is consolidating all logic into a single trigger with a handler class where execution order is explicitly controlled."
Q27. How do you handle governor limits when processing large data imports via Data Loader? (Medium)
📤 Data Loader processes records in configurable batch sizes (default 200). Each batch is a separate transaction with fresh governor limits. Triggers and flows fire per batch — ensure they're bulkified to handle 200 records cleanly.
✅ Key Points
  • Each Data Loader batch = one transaction = fresh limits (100 SOQL, 150 DML etc.)
  • Default batch size 200 — reduce to 100 if hitting limits, increase to 2000 for better performance
  • Temporarily disable triggers/workflows if doing a one-time large data migration
  • ⚠️ Disabling triggers in prod is risky — always test the impact first in sandbox
Say This in Interview
"Data Loader sends records in batches with fresh governor limits per batch — I'd ensure all triggers are bulkified for 200 records, adjust the batch size based on trigger complexity, and for one-time migrations, consider temporarily disabling non-critical automation."
💻 LWC Scenarios
Component communication, performance, debugging — the real interview questions
Q28–Q39
Q28. Your LWC component is not loading data — how do you debug? (Easy)
🔍 Debug LWC data loading issues in layers — browser console first, then Apex logs, then the Network tab. Isolate whether the issue is in the component's JS, the Apex method, or the data itself.
🔍 Debugging Steps
1. Open browser DevTools (F12) → Console tab — look for JavaScript errors
2. Add console.log() statements in JS to check if wire/apex is returning data
3. Check Apex debug logs: Setup → Debug Logs → run the action → check Apex execution
4. Verify the @wire or @AuraEnabled method is returning correct data with a SOQL test in Workbench
5. Check the user's FLS/Profile — are they missing field permission that the Apex queries?
6. In Experience Cloud: check the Guest User profile has object/field access
Say This in Interview
"I debug in layers — browser console for JS errors, console.log in the wire handler to check returned data, Apex debug logs to verify server-side execution, and Workbench SOQL to validate the query returns data for that user's permissions."
Q29. How do you pass data between two unrelated LWC components? (Medium)
📡 Use Lightning Message Service (LMS) — define a Message Channel, publish from one component, subscribe in the other. LMS works across unrelated components, Aura, Visualforce, and LWC on the same page.
Communication Pattern | Use When
@api property | Parent → Child (direct)
Custom Events | Child → Parent (direct)
Lightning Message Service | Unrelated components anywhere on page
Apex / Custom Settings | Cross-page or cross-session data
// Publisher component
import { publish, MessageContext } from 'lightning/messageService';
import ORDER_SELECTED from '@salesforce/messageChannel/OrderSelected__c';

publish(this.messageContext, ORDER_SELECTED, { orderId: this.selectedId });

// Subscriber component
import { subscribe, MessageContext } from 'lightning/messageService';

subscribe(this.messageContext, ORDER_SELECTED, (msg) => {
    this.orderId = msg.orderId;
});
Say This in Interview
"For unrelated components I use Lightning Message Service — define a Message Channel, publish from the sender with a payload, subscribe in the receiver. It works across LWC, Aura, and Visualforce components on the same Lightning page."
Q30. How do you implement real-time data refresh in LWC without a page reload? (Medium)
🔄 Use EmpApi (Streaming API) to subscribe to Platform Events or CDC (Change Data Capture) — the component receives real-time notifications when data changes and updates its state without a page reload.
import { subscribe, onError } from 'lightning/empApi';

connectedCallback() {
    // Subscribe to a Platform Event
    subscribe('/event/OrderStatusChange__e', -1, (event) => {
        this.orderStatus = event.data.payload.Status__c;
    }).then(sub => {
        this.subscription = sub;
    });
}
✅ Options for Real-Time
  • Platform Events + EmpApi: Best for custom real-time events (order status, stock updates)
  • Change Data Capture + EmpApi: Automatic events when any Salesforce record changes
  • refreshApex(): Re-runs a @wire query on demand — not real-time but simple
Say This in Interview
"I use EmpApi to subscribe to Platform Events or Change Data Capture — when the server publishes an event, my LWC receives it instantly and updates its reactive property, triggering re-render without any page reload."
Q31. How do you conditionally show/hide fields in LWC based on user profile? (Medium)
👤 Call an Apex method to retrieve the user's profile or permission set, store it in a reactive property, and use if:true/if:false or lwc:if directives in HTML to conditionally render sections.
// Apex
@AuraEnabled(cacheable=true)
public static Boolean isManager() {
    return [SELECT Profile.Name FROM User WHERE Id = :UserInfo.getUserId()]
        .Profile.Name == 'Sales Manager';
}

<!-- LWC HTML -->
<template lwc:if={isManager}>
    <c-manager-section></c-manager-section>
</template>
Say This in Interview
"I call an Apex method with cacheable=true to get the user's profile, store it as a reactive property, and use lwc:if in HTML to conditionally render sections — always enforce access server-side too, not just in the UI."
Q32. Your LWC works in sandbox but breaks in Production — why? (Medium)
⚠️ Usually a missing static resource, a hardcoded sandbox ID, a permission difference, or a cached version of the component in Production. Always test in a Full Copy sandbox before Production deployment.
🔍 Common Causes
  • 1️⃣ Missing dependency: Static resource, custom label, or custom metadata not included in the Change Set
  • 2️⃣ Hardcoded sandbox IDs: Record Type IDs, Price Book IDs — always query dynamically
  • 3️⃣ Profile differences: User in sandbox has System Admin; the Production user has a different profile with missing FLS
  • 4️⃣ Browser cache: Ask the user to hard-refresh (Ctrl+Shift+R) before concluding it's broken
  • 5️⃣ API version mismatch: Component using features not available in the Production API version
Say This in Interview
"LWC breaking in Production but not sandbox usually means a missing dependency in the Change Set, a hardcoded sandbox-specific ID, or a profile permission difference — I'd check browser console errors first, then validate each dependency was included in the deployment."
Q33. How do you handle Apex errors gracefully in LWC? (Medium)
🚨 Catch errors in the wire handler or the imperative call's catch block, store them in a reactive property, and display them using a dedicated error section in the HTML template — never let errors fail silently.
// JS — handle errors from an imperative Apex call
handleLoad() {
    getOrderData({ orderId: this.recordId })
        .then(result => {
            this.orderData = result;
            this.error = undefined;
        })
        .catch(error => {
            this.error = error?.body?.message || 'Unknown error occurred';
            this.orderData = undefined;
        });
}

<!-- HTML -->
<template lwc:if={error}>
    <div class="error-msg">{error}</div>
</template>
Say This in Interview
"I catch errors in the .catch() block of imperative Apex calls, extract the message from error.body.message, store it in a reactive property, and display it in a dedicated error section in the template — errors must always be visible to users, never silently swallowed."
Q34. How do you make an LWC available in Experience Cloud (Community)? (Medium)
🌐 Configure the LWC's .js-meta.xml to include "lightningCommunity__Page" in the targets, then set the Guest User profile permissions for the objects and fields the component accesses.
<!-- componentName.js-meta.xml -->
<LightningComponentBundle>
    <apiVersion>59.0</apiVersion>
    <isExposed>true</isExposed>
    <targets>
        <target>lightningCommunity__Page</target>
        <target>lightningCommunity__Default</target>
    </targets>
</LightningComponentBundle>
✅ Critical Steps Often Missed
  • Set Guest User profile to have READ access to objects and fields the LWC queries
  • Apex methods used must have without sharing for public-facing data or respect Guest User sharing
  • Never expose sensitive data via Experience Cloud LWC — Guest User can access it unauthenticated
Say This in Interview
"Add lightningCommunity__Page to the component's targets in meta.xml to make it available in Experience Builder, then configure Guest User profile permissions — missing Guest User access is the most common reason community LWCs show blank or error."
Q35. How do you implement a multi-step form wizard in LWC? (Advanced)
📋 Use a currentStep reactive property to control which step renders using lwc:if. Store form data in a shared object across steps, validate each step before advancing, and submit all data in one Apex call on the final step.
// JS — Step controller
@track currentStep = 1;
@track formData = { name: '', email: '', address: '' };

nextStep() {
    if (this.validateStep(this.currentStep)) {
        this.currentStep++;
    }
}

submitForm() {
    saveFormData({ data: JSON.stringify(this.formData) });
}

<!-- HTML -->
<template lwc:if={isStep1}><c-step-one data={formData}></c-step-one></template>
<template lwc:if={isStep2}><c-step-two data={formData}></c-step-two></template>
Say This in Interview
"I use a currentStep integer to control which step renders via lwc:if, accumulate form data in a shared tracked object passed down to child step components, validate before advancing, and make a single Apex call on the final step to save all data atomically."
Q36. Your LWC is causing performance issues on mobile — how do you fix it? (Advanced)
📱 Mobile performance issues in LWC are usually caused by rendering too many DOM nodes, no lazy loading, heavy images, or too many wire adapters running on connectedCallback.
✅ Performance Fixes
  • Pagination/infinite scroll: Never render 1000 records — load 20 at a time
  • Lazy loading: Load child components and data only when needed (on scroll or interaction)
  • Debounce search: Add 300ms debounce on search inputs to avoid firing Apex on every keystroke
  • cacheable=true: Use cacheable Apex methods — results are client-cached reducing repeat network calls
  • Reduce DOM nodes: Avoid deeply nested templates, flatten component structure
Say This in Interview
"Mobile LWC performance issues come from too much DOM — I'd add pagination to limit rendered records, lazy-load child components, debounce all search inputs, use cacheable=true on wire adapters, and measure with Salesforce's LWC Performance Analysis tool."
Q37. How do you implement file upload with preview in LWC? (Medium)
📎 Use the lightning-file-upload base component for simple uploads. For a custom preview, handle the onuploadfinished event to get the ContentDocumentId, then use NavigationMixin or a URL formula to display the preview.
<!-- Simple upload with preview (accept takes an array, so bind it from JS) -->
<lightning-file-upload
    label="Upload Document"
    accept={acceptedFormats}
    record-id={recordId}
    onuploadfinished={handleUpload}>
</lightning-file-upload>

// JS — handle after upload
acceptedFormats = ['.pdf', '.jpg', '.png'];

handleUpload(event) {
    const files = event.detail.files;
    this.uploadedFileId = files[0].contentVersionId;
    this.previewUrl = '/sfc/servlet.shepherd/version/download/' + this.uploadedFileId;
}
Say This in Interview
"I use lightning-file-upload base component linked to the recordId for automatic ContentDocument attachment, handle the onuploadfinished event to get the ContentVersionId, and construct a preview URL using Salesforce's file servlet for the image preview."
Q38. How do you implement infinite scroll in an LWC datatable? (Advanced)
⬇️ Use lightning-datatable's enable-infinite-loading attribute with the onloadmore event. When the user scrolls to the bottom, load the next page of records via Apex and append to the existing list.
<!-- HTML -->
<lightning-datatable
    data={records}
    columns={columns}
    enable-infinite-loading
    onloadmore={loadMore}
    is-loading={isLoading}>
</lightning-datatable>

// JS
loadMore(event) {
    this.isLoading = true;
    this.offset += 50;
    getMoreRecords({ offset: this.offset })
        .then(result => {
            this.records = [...this.records, ...result];
            if (result.length < 50) event.target.enableInfiniteLoading = false;
        })
        .finally(() => { this.isLoading = false; });
}
Say This in Interview
"I use lightning-datatable's enable-infinite-loading with an onloadmore handler — when triggered, I call Apex with an incrementing offset, append the results to the existing records array using spread operator, and disable infinite loading when Apex returns fewer results than the page size."
Q39. How do you implement drag-and-drop in LWC? (Advanced)
🖱️ Use the HTML5 Drag and Drop API — handle dragstart, dragover, and drop events in JavaScript. Store the dragged item's ID in the dataTransfer object and process the drop to update the Salesforce record.
<!-- HTML -->
<div draggable="true" ondragstart={handleDragStart}>Drag me</div>
<div ondragover={handleDragOver} ondrop={handleDrop}>Drop here</div>

// JS
handleDragStart(event) {
    event.dataTransfer.setData('recordId', event.target.dataset.id);
}
handleDragOver(event) {
    event.preventDefault();
}
handleDrop(event) {
    event.preventDefault();
    const id = event.dataTransfer.getData('recordId');
    updateRecordStatus({ recordId: id, newStatus: event.target.dataset.status });
}
Say This in Interview
"I use the HTML5 Drag and Drop API — dragstart stores the record ID in dataTransfer, dragover prevents default browser handling, and drop reads the ID from dataTransfer and calls an Apex method to update the record's position or status."
🔐 Security & Sharing
Record access, profiles, permission sets — fundamentals every Salesforce pro must own
Q40–Q49
Q40. A user can't see a record they should have access to — step-by-step troubleshooting? (Easy)
🔍 Record visibility issues have a clear troubleshooting order — start with OWD, then role hierarchy, then sharing rules, then manual sharing, then Apex sharing. Use the "Why can't I see this record?" tool in Setup.
🔍 Troubleshooting Order
1. Check OWD for the object — if Private, the user needs explicit access beyond ownership
2. Check Role Hierarchy — is the user's role above or related to the record owner's role?
3. Check Sharing Rules — does any criteria-based or ownership-based rule grant access?
4. Check Manual Sharing — was the record shared directly with the user?
5. Check Teams — Account Teams, Opportunity Teams
6. Check Profile/Permission Set — does the user have object-level READ permission?
7. Use Setup → Record Access → the "Why can't I see this?" tool for a definitive answer
Say This in Interview
"I check in order: OWD → Role Hierarchy → Sharing Rules → Manual Sharing → Profile object permissions. Salesforce provides the 'Why can't I see this record?' tool in Setup that gives the exact reason — I always use that first to save time."
Q41. Your OWD is Private but users need to collaborate on records — what's your approach? (Medium)
🤝 Keep OWD Private for security, then layer access using Sharing Rules, Teams, or Apex Managed Sharing. Never change OWD to Public just for convenience — it opens all records to everyone.
Sharing Method | Use When | Maintainability
Sharing Rules | Rule-based access (same region, same team) | ✅ Easy
Account/Opp Teams | Record-specific collaboration | ✅ Medium
Manual Sharing | Ad-hoc, one-off sharing | ⚠️ Hard to audit
Apex Managed Sharing | Complex business rules for sharing | ✅ Best for complex
🏭 XYZ Company

Our Orders OWD is Private — only the assigned Business Development rep can see their orders. When International Marketing needs to view the same orders, we use a Sharing Rule based on "Order Region = International" to grant Read access to the International Marketing team role. Zero code, fully auditable.

Say This in Interview
"Keep OWD Private and layer access using Sharing Rules for rule-based patterns, Teams for record-level collaboration, and Apex Managed Sharing for complex business logic — never loosen OWD just because a few users need access."
Q42. How do you implement Apex Managed Sharing for complex business rules? (Advanced)
⚙️ Create Share records (e.g. Order__Share) programmatically in Apex — insert them with the User/Group ID, record ID, access level (Read/Edit), and RowCause = Schema.Order__Share.rowCause.Manual.
// Share an Order record with a specific user
Order__Share shareRecord = new Order__Share();
shareRecord.ParentId = orderId;         // Record to share
shareRecord.UserOrGroupId = userId;     // Who gets access
shareRecord.AccessLevel = 'Read';       // Read / Edit
shareRecord.RowCause = Schema.Order__Share.rowCause.Manual;

Database.SaveResult sr = Database.insert(shareRecord, false);
if (!sr.isSuccess()) {
    // Handle error — usually means access already exists
}
Say This in Interview
"Apex Managed Sharing inserts Share records programmatically — setting the ParentId, UserOrGroupId, AccessLevel, and RowCause. I typically trigger this from an after-insert trigger or batch job, and use Database.insert with allOrNone=false to handle cases where sharing already exists."
Q43. Profile vs Permission Set — which do you use in 2026 and why? (Medium)
🔐 In 2026, use Permission Sets and Permission Set Groups as the primary access control mechanism. Salesforce is officially migrating away from Profiles — permissions will eventually be removed from Profiles entirely.
Factor | Profile | Permission Set
User assignment | One profile per user | Multiple per user
Salesforce direction 2026 | ⚠️ Being deprecated | ✅ Future standard
Granularity | Broad | Specific, additive
Deployment | ⚠️ Problematic across orgs | ✅ Clean deployment
Best practice 2026 | Minimal profile (baseline only) | All permissions via Perm Sets
Say This in Interview
"In 2026 I use minimal Profiles for basic login/record type access only, and manage all permissions via Permission Sets and Permission Set Groups — Salesforce is actively deprecating permissions from Profiles, so this approach is both current best practice and future-proof."
Q44. How do you prevent a specific user from deleting records they own? (Medium)
🚫 Remove the "Delete" permission from the user's Profile or Permission Set for that object. If you need more granular control (e.g. prevent delete after a certain stage), use a before-delete trigger — Validation Rules don't fire on delete.
✅ Options
  • 1️⃣ Profile/Permission Set: Remove the Delete object permission — the user can't delete any record of that type
  • 2️⃣ Validation Rule on delete: Not natively supported — Validation Rules don't fire on delete
  • 3️⃣ Before-Delete Trigger: Check conditions (e.g. Status = Approved) and call addError() on the Trigger.old records to block deletion
trigger PreventOrderDelete on Order__c (before delete) {
    for (Order__c o : Trigger.old) {
        if (o.Status__c == 'Approved') {
            o.addError('Cannot delete an Approved Order.');
        }
    }
}
Say This in Interview
"To prevent all deletions, remove the Delete permission from the Profile or Permission Set. For conditional prevention (e.g. only when Status = Approved), use a before-delete trigger calling addError() — Validation Rules don't fire on delete events."
Q45. How do you implement territory-based record access? (Advanced)
🗺️ Use Enterprise Territory Management — assign Accounts to Territories, assign Users to Territories, and configure Territory-based sharing to automatically grant access to records within a territory.
✅ Territory Management Setup
1. Enable Enterprise Territory Management: Setup → Territories → Enable
2. Create the Territory hierarchy (North India → Gujarat → Ahmedabad)
3. Assign Accounts to Territories via assignment rules or manually
4. Assign Users to Territories — they get access to all Accounts in that territory
5. Configure Opportunity access via territory model settings
🏭 XYZ Company

XYZ Company has territories by region — North, South, West, International. We assign Accounts to territories so each BD rep only sees accounts in their region. The International Marketing team has an International territory that overlaps multiple BD territories, giving them cross-region visibility without changing OWD.

Say This in Interview
"Enterprise Territory Management assigns Accounts to geographic or business territories, assigns users to those territories, and automatically shares all related records — giving each team exactly the access their territory warrants without manual sharing rules per user."
Q46. Client wants a full audit trail for all field changes — what do you implement? (Medium)
📋 Use Field Audit Trail (paid feature) for up to 10 years of field history, or standard Field History Tracking (free, up to 20 fields, 18 months retention) for most use cases.
Feature | Field History Tracking | Field Audit Trail
Cost | Free | Paid add-on
Retention | 18 months | Up to 10 years
Max fields tracked | 20 per object | 60 per object
Compliance | Basic | GDPR, HIPAA, SOX
Say This in Interview
"For most use cases I enable Field History Tracking — free, up to 20 fields, 18-month retention. For compliance requirements like GDPR or SOX needing years of history, I recommend Field Audit Trail which retains up to 10 years and supports 60 fields per object."
Q47. How do you test if your sharing rules are working correctly? (Easy)
🧪 Use the "Login As" feature to test as the specific user, check their record list views, and use Setup → Record Access → "Who can see this record?" to verify sharing grants.
✅ Testing Methods
  • Login As user: Setup → Users → Login next to the user → check what records they see
  • Record Access tool: On any record → Sharing button → "Who has access?" shows every user and why — the same check is queryable in Apex (see the sketch below)
  • SOQL in Workbench: Run as the user (using their session) to verify query returns expected records
  • Recalculate Sharing: After rule changes, run Setup → Sharing Settings → Recalculate to ensure rules applied
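A minimal sketch of the programmatic check via the standard UserRecordAccess object — the two IDs are placeholders:

// Returns the access the given user has to the given record, and why it caps out there
UserRecordAccess access = [
    SELECT RecordId, HasReadAccess, HasEditAccess, MaxAccessLevel
    FROM UserRecordAccess
    WHERE UserId = :someUserId AND RecordId = :someRecordId
];
System.debug('Read: ' + access.HasReadAccess + ', Edit: ' + access.HasEditAccess);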
Say This in Interview
"I use Login As to verify from the user's exact perspective, the Record's Sharing button to see who has access and why, and always trigger Recalculate Sharing after changing sharing rules to ensure they've been applied to existing records."
Q48. How do you restrict a user from seeing records in a specific country without Territory Management? (Medium)
🌍 Set OWD to Private, then use Criteria-Based Sharing Rules to grant access only to records where the Country field matches the user's region — users only see records in their country, not others.
✅ Implementation
  • 1️⃣ Set Account OWD to Private (users only see accounts they own)
  • 2️⃣ Create a Sharing Rule: Share Accounts WHERE BillingCountry = 'India' TO India_Sales_Team (role/group)
  • 3️⃣ Repeat per country/region with the respective role groups
  • 4️⃣ Users in the India team see all Indian accounts but not USA/UK accounts
Say This in Interview
"Private OWD plus Criteria-Based Sharing Rules per country — each rule grants the regional role group access to records where BillingCountry matches their region. Simple, no code, fully configurable by admins."
Q49. How do you handle record-level security in Apex — with sharing vs without sharing? (Advanced)
⚖️ with sharing enforces the running user's sharing rules in SOQL queries. without sharing ignores sharing rules and returns all records. inherited sharing runs with the calling context's sharing rules.
Keyword | What It Does | Use When
with sharing | Enforces user's sharing rules in SOQL | User-facing methods (most cases)
without sharing | Ignores sharing — returns all records | System/admin operations, batch jobs
inherited sharing | Uses caller's sharing context | Utility classes called from both contexts
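The three declarations side by side — a minimal sketch with illustrative class names:

public with sharing class OrderController {
    // SOQL here returns only the records the running user can see
}

public without sharing class OrderAdminService {
    // SOQL here returns all records, regardless of the user's access
}

public inherited sharing class OrderUtils {
    // Runs with sharing when called from OrderController,
    // without sharing when called from OrderAdminService
}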
Say This in Interview
"I use 'with sharing' for all user-facing Apex to respect the running user's record access. 'without sharing' only for admin/system operations like batch jobs that need full data access. 'inherited sharing' for utility classes so they adapt to whatever context calls them."
🔗 Integration
REST, SOAP, Platform Events, Named Credentials — enterprise-level integration scenarios
Q50–Q59
Q50. Your REST API integration is intermittently failing — how do you debug it? (Medium)
🔌 Intermittent failures are harder to debug than consistent failures. Capture full request/response logs, check for rate limiting and timeout patterns, and see whether failures correlate with specific data or times of day.
🔍 Debug Steps
1. Enable Debug Logs → capture the Apex callout request and the full response, including headers and status code
2. Check response status codes — 429 = rate limited, 503 = external server down, 408 = timeout
3. Reproduce with Postman/Workbench REST Explorer — is the external API itself reliable?
4. Check for a time pattern — failures at specific hours suggest server maintenance or peak load on the external system
5. Add logging to a custom object — capture request payload, response, timestamp, and status for each call
6. Implement retry logic for transient failures (see Q18)
🏭 XYZ Company — IndiaMART Integration

Our IndiaMART integration had intermittent 503 errors. Debug logs showed the failures always occurred between 2-3 AM IST — IndiaMART's maintenance window. We added a 3-hour scheduled retry window and a Custom Object log to track all API calls, which gave us full visibility and zero data loss.

Say This in Interview
"I capture full request/response in debug logs, check status codes for patterns (rate limits = 429, timeouts = 408), reproduce independently in Postman, correlate failures to specific times or data patterns, and add a custom logging object for persistent audit trail."
Q51. How do you handle API rate limits from an external system? (Medium)
⏱️ Implement exponential backoff retry logic, use Queueable chaining with delays, batch requests to stay under limits, and monitor the remaining quota from API response headers before making calls.
✅ Rate Limit Handling Strategy
  • Check response headers: Most APIs return X-RateLimit-Remaining — check before next call
  • Exponential backoff: On 429, wait 1s → retry → wait 2s → retry → wait 4s → give up (see the sketch below)
  • Queueable with delay: Enqueue next batch only after a delay window
  • Batch requests: Consolidate multiple records into one API call if external system supports bulk endpoints
  • Platform Events: Decouple processing — receive all records instantly, process at controlled rate
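A minimal sketch of backoff via Queueable chaining — it assumes the delayed-enqueue overload System.enqueueJob(job, delayInMinutes) available in recent API versions (so the backoff is in minutes, not seconds), plus a hypothetical Named Credential:

public class RateLimitedCallout implements Queueable, Database.AllowsCallouts {
    private Integer attempt;
    public RateLimitedCallout(Integer attempt) { this.attempt = attempt; }

    public void execute(QueueableContext ctx) {
        HttpRequest req = new HttpRequest();
        req.setEndpoint('callout:Partner_API/orders'); // hypothetical endpoint
        req.setMethod('GET');
        HttpResponse res = new Http().send(req);

        // On 429, re-enqueue with a growing delay instead of hammering the API
        if (res.getStatusCode() == 429 && attempt < 3) {
            Integer delayMinutes = 1 << attempt; // 1 → 2 → 4 minutes
            System.enqueueJob(new RateLimitedCallout(attempt + 1), delayMinutes);
        }
    }
}
// Kick off: System.enqueueJob(new RateLimitedCallout(0));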
Say This in Interview
"I read the rate limit headers from each response, implement exponential backoff on 429 responses using Queueable chaining, batch requests to stay under limits, and use Platform Events to decouple receipt from processing so we can pace the actual API calls."
Q52. REST vs SOAP vs Platform Events — when do you use which? (Medium)
⚖️ REST for modern integrations (JSON, mobile, APIs), SOAP for legacy enterprise systems (XML, strict contracts), Platform Events for real-time async event-driven integration where decoupling is essential.
Protocol | Use When | Format
REST API | Mobile apps, modern SaaS, web APIs | JSON
SOAP API | Legacy ERP/SAP, banking systems, strict WSDL contracts | XML
Platform Events | Real-time pub/sub, decoupled async integration | JSON
Bulk API | Large data loads (millions of records) | CSV/JSON
Streaming API | Push notifications to clients in real time | JSON
🏭 XYZ Company

IndiaMART integration → REST API (modern JSON endpoints). Business Central integration → SOAP API (legacy Microsoft system with strict WSDL contracts). Order status updates to warehouse → Platform Events (decouple Salesforce from warehouse system, both sides process independently).

Say This in Interview
"REST for modern JSON-based integrations, SOAP for legacy enterprise systems with strict contracts, Platform Events for async event-driven architecture where sender and receiver must be decoupled and independently scalable."
Q53. Your Named Credential stopped authenticating after a sandbox refresh — how do you fix it? (Easy)
🔑 Named Credential authentication tokens and certificates are cleared during a sandbox refresh. You must re-authenticate or re-enter credentials after every refresh — this is expected behaviour and part of the post-refresh checklist.
✅ Fix Steps
1. Setup → Named Credentials → find the credential that's failing
2. Click Edit → re-enter username/password or re-authenticate the OAuth flow
3. For OAuth: click "Start Authentication Flow" → complete auth in the popup
4. Test with Workbench REST Explorer using the Named Credential endpoint
5. Add Named Credential re-auth to your post-refresh checklist
Say This in Interview
"Named Credentials lose their auth tokens on sandbox refresh — it's expected behaviour. I re-authenticate immediately after every refresh as part of my post-refresh checklist, and I update sandbox Named Credentials to point to test/sandbox endpoints of external systems, not production APIs."
Q54. How do you implement a webhook receiver in Salesforce? (Advanced)
📨 Expose a Salesforce REST API endpoint using an @RestResource Apex class. The external system calls this endpoint (via Site or Connected App) and your Apex class processes the incoming payload.
@RestResource(urlMapping='/webhook/order/*')
global with sharing class WebhookReceiver {
    @HttpPost
    global static String receiveOrder() {
        RestRequest req = RestContext.request;
        String body = req.requestBody.toString();
        Map<String, Object> payload = (Map<String, Object>) JSON.deserializeUntyped(body);

        // Process the webhook payload
        String orderId = (String) payload.get('order_id');
        // Create/update records...

        return JSON.serialize(new Map<String, String>{ 'status' => 'received' });
    }
}
Say This in Interview
"I expose a webhook receiver using @RestResource with @HttpPost — the external system posts to my Salesforce URL, the Apex class parses the JSON payload from RestContext.request, processes the data, and returns a success acknowledgement. I'd expose this via a Salesforce Site for unauthenticated access or via Connected App for authenticated webhooks."
Q55How do you prevent duplicate records created by integration? Medium
๐Ÿ”Use a combination of Duplicate Rules for real-time prevention, an External ID field for upsert-based integration, and a before-insert trigger for complex duplicate logic that standard rules can't handle.
✅ Defence in Depth
  • 1️⃣External ID field: Create ExternalSystem_ID__c field marked as External ID — use Database.upsert() instead of insert to auto-handle duplicates
  • 2️⃣Duplicate Rules: Create Matching Rules on email/phone/name — Duplicate Rules block or warn on matches
  • 3️⃣Before-insert trigger: Query by External ID before insert, update if exists instead of creating new
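A minimal sketch of option 1 — assuming an ExternalId__c text field on Lead marked as External ID, as in the example below:
// Upsert keyed on the external system's ID — Salesforce handles the duplicate check
List<Lead> leads = new List<Lead>{
    new Lead(LastName = 'Sharma', Company = 'Acme Industries', ExternalId__c = 'IM-10042')
};
// Matches on ExternalId__c: updates the existing Lead if found, inserts if not
Database.UpsertResult[] results = Database.upsert(leads, Lead.ExternalId__c, false);
for (Database.UpsertResult r : results) {
    if (!r.isSuccess()) {
        System.debug('Upsert failed: ' + r.getErrors()[0].getMessage());
    }
}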
๐Ÿญ XYZ Company

IndiaMART sends leads with a unique lead_id. I created ExternalId__c on Lead object and use Database.upsert() with ExternalId__c as the external ID field. If the same lead comes twice, Salesforce automatically updates the existing Lead instead of creating a duplicate — zero code for duplicate prevention.

Say This in Interview
"Best approach is External ID + Database.upsert() — zero duplicate risk since Salesforce automatically matches on the external key. For systems without unique IDs, I combine Duplicate Rules for real-time blocking with a before-insert trigger for custom matching logic."
Q56Client wants real-time data sync between Salesforce and SAP — how do you architect this? Advanced
๐Ÿ—️Use Platform Events or Change Data Capture from Salesforce side, with MuleSoft or a middleware layer for bidirectional sync with SAP. Never build direct point-to-point integrations between two enterprise systems.
Component | Purpose
Change Data Capture (CDC) | Detect changes in Salesforce and publish events
Platform Events | Custom events from Salesforce side
MuleSoft / Azure Integration | Middleware that transforms Salesforce events to SAP format
SAP API Gateway | SAP-side receiver for transformed payloads
Error handling queue | Dead letter queue for failed sync attempts
Say This in Interview
"I'd use Change Data Capture to publish Salesforce changes as events, route them through MuleSoft for transformation to SAP's data model, and implement a dead letter queue for failures — never building point-to-point since it becomes impossible to maintain as both systems evolve."
Q57How do you handle large payloads in Apex callouts that exceed limits? Advanced
📦Salesforce caps callout request and response sizes at 6MB in synchronous and 12MB in asynchronous Apex. For larger payloads, implement pagination on the external API side, process in chunks using Queueable chaining, or stage the response as a file and retrieve it in parts.
✅ Strategies
  • API pagination: Request data in pages (e.g. 100 records per call) using offset/cursor pagination
  • Queueable chaining: Process page 1 → chain Queueable for page 2 → chain for page 3 etc.
  • Streaming download: External API generates file → store URL → Salesforce retrieves file in chunks
  • External Objects: For read-heavy data, use Salesforce Connect to query external system directly without copying data
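A minimal sketch of the Queueable chaining strategy — the Named Credential, page parameter, and JSON-array response shape are assumptions about the external API:
// Each job fetches one page, processes it, then chains the next — fresh limits per job
public class PagedSyncJob implements Queueable, Database.AllowsCallouts {
    private Integer pageNumber;

    public PagedSyncJob(Integer pageNumber) {
        this.pageNumber = pageNumber;
    }

    public void execute(QueueableContext ctx) {
        HttpRequest req = new HttpRequest();
        req.setEndpoint('callout:External_API/orders?page=' + pageNumber); // hypothetical endpoint
        req.setMethod('GET');
        HttpResponse res = new Http().send(req);

        // Assumes the API returns a JSON array — adapt to the real response shape
        List<Object> records = (List<Object>) JSON.deserializeUntyped(res.getBody());
        // ...process this page's records...

        if (!records.isEmpty()) {
            System.enqueueJob(new PagedSyncJob(pageNumber + 1)); // chain the next page
        }
    }
}
// Kick off the sync: System.enqueueJob(new PagedSyncJob(1));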
Say This in Interview
"For large payloads I implement API pagination — request one page at a time using Queueable chaining where each job processes a page and chains the next. This keeps each callout well within the 12MB limit and gives full governor limit resets between pages."
Q58How do you monitor failed integration records and retry them? Medium
📊Create a custom Integration Log object that captures every attempt — request, response, status, timestamp. Build a Scheduled Batch that queries failed records and retries them automatically.
✅ Integration Monitoring Pattern
  • Integration_Log__c object: Fields: Status__c, Request_Payload__c, Response_Body__c, Error_Message__c, Retry_Count__c, Last_Attempted__c
  • After each callout: Insert/update log record with result
  • Scheduled retry batch: Every hour, query WHERE Status__c = 'Failed' AND Retry_Count__c < 3 → retry
  • Alert on max retries: When Retry_Count__c = 3, send email to integration team and set Status = 'Manual Review'
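A minimal sketch of the retry batch, using the Integration_Log__c fields listed above (the callout itself is elided):
// Scheduled hourly — re-attempts failed callouts up to the retry cap
public class IntegrationRetryBatch implements Database.Batchable<sObject>, Database.AllowsCallouts {

    public Database.QueryLocator start(Database.BatchableContext bc) {
        return Database.getQueryLocator([
            SELECT Id, Request_Payload__c, Retry_Count__c
            FROM Integration_Log__c
            WHERE Status__c = 'Failed' AND Retry_Count__c < 3
        ]);
    }

    public void execute(Database.BatchableContext bc, List<Integration_Log__c> scope) {
        for (Integration_Log__c log : scope) {
            // ...re-issue the callout with log.Request_Payload__c...
            log.Retry_Count__c += 1;
            log.Last_Attempted__c = System.now();
        }
        update scope;
    }

    public void finish(Database.BatchableContext bc) {
        // alert the integration team about records that hit the retry cap
    }
}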
Say This in Interview
"I create a custom Integration Log object capturing every attempt's status, payload, and error. A scheduled batch queries failed records hourly, retries up to 3 times with exponential backoff, then flags for manual review after max retries — giving full visibility and zero silent failures."
Q59How do you secure an inbound API call from an external system to Salesforce? Advanced
🔒Use Connected App with OAuth 2.0 for authenticated access, or IP filtering + API token validation for simpler scenarios. Never expose a Salesforce REST endpoint without authentication.
Security Method | How | Best For
OAuth 2.0 JWT Bearer | External system gets access token, passes in header | Server-to-server (most secure)
Connected App + Client Credentials | Client ID + Secret for token | Trusted internal systems
IP Filtering | Restrict inbound IPs in Connected App settings | Additional layer of security
Custom Auth Header | Validate custom token in Apex before processing | Simple webhooks
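For the Custom Auth Header row, a minimal sketch of the check at the top of Q54's receiveOrder() method — the header name and the idea of storing the secret in protected Custom Metadata are assumptions:
// First lines inside receiveOrder() — reject calls that lack the shared secret
String token = RestContext.request.headers.get('X-Webhook-Token'); // hypothetical header name
String expected = Webhook_Setting__mdt.getInstance('Default').Secret__c; // hypothetical protected Custom Metadata
if (token == null || !token.equals(expected)) { // equals() = case-sensitive compare
    RestContext.response.statusCode = 401; // unauthorised
    return JSON.serialize(new Map<String, String>{ 'status' => 'unauthorised' });
}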
Say This in Interview
"For server-to-server integrations I use OAuth 2.0 JWT Bearer flow via a Connected App — the external system authenticates with a signed JWT to get a short-lived access token. I combine this with IP filtering in the Connected App settings for defence in depth."
🛠️
DevOps & Deployment
The rising star of 2026 interviews — every senior role asks about DevOps now
Q60–Q69
Q60Your Change Set deployment failed due to test class failure — exact steps? Medium
🚨Read the error message in full. Test failures during deployment are one of the most common Salesforce problems — they require a structured diagnostic approach, not random fixes.
🔍 Exact Steps
  • 1
    In Production deployment: click on the deployment → read the EXACT test class name and line number that failed
  • 2
    Go to Dev sandbox → Developer Console → Test → Run Only that specific test class
  • 3
    Reproduce the failure — if it passes in sandbox, the issue is sandbox vs Production data differences
  • 4
    Check if it's YOUR code causing the failure, or an unrelated pre-existing broken test
  • 5
    Fix the root cause — update test class to not depend on org data, fix the code logic
  • 6
    Add the fixed test class to your Change Set → re-upload → Validate again → Quick Deploy
Say This in Interview
"I read the exact failing test class and line number from the deployment error, reproduce it in sandbox, fix whether it's my new code or an existing broken test, add the fix to the Change Set, validate again, and use Quick Deploy to skip re-running all tests."
Q61How do you delete an Apex class from Production? Advanced
💥Change Sets CANNOT delete Apex classes. You need SFDX CLI with a destructiveChanges.xml file, or Workbench Metadata API deploy with a destructive manifest.
<!-- destructiveChanges.xml -->
<Package xmlns="http://soap.sforce.com/2006/04/metadata">
    <types>
        <members>OldUnusedClass</members>
        <name>ApexClass</name>
    </types>
    <version>59.0</version>
</Package>

# SFDX CLI command
sf project deploy start \
  --manifest package.xml \
  --post-destructive-changes destructiveChanges.xml \
  --target-org Production
Say This in Interview
"Change Sets can't delete Apex classes — I use SFDX CLI with destructiveChanges.xml listing the class to delete, alongside an empty package.xml. Always validate with --dry-run first since there's no undo after a destructive deployment."
Q62How do you handle a bad deployment when there's no rollback in Salesforce? Advanced
🆘No rollback exists in Salesforce — prevention is everything. If something goes wrong, your options are: deploy a fix, deactivate the problem component, or manually revert. Speed and preparation are critical.
🚨 Recovery Options
  • 1️⃣Deploy a fix: Fix in sandbox → new Change Set → validate → Quick Deploy (fastest)
  • 2️⃣Deactivate component: Flows, Validation Rules, Triggers can be deactivated immediately in Setup
  • 3️⃣Revert Apex: Keep old code in a Git branch → deploy the previous version
  • 4️⃣Manual data fix: If bad data was saved, use Data Loader to correct affected records
✅ Prevention is Better
  • Always Validate before Deploy — catch issues without committing
  • Deploy during off-peak hours — maximises time to fix if something goes wrong
  • Have a break-glass plan ready — know exactly what you'll deactivate if things break
  • Use Git — always have the previous working version ready to deploy
Say This in Interview
"No rollback means prevention is everything — always validate first, deploy off-peak, and have a break-glass plan ready. If something does go wrong, deactivate the problem component immediately, fix in sandbox, and Quick Deploy the fix."
Q63How do you deploy between two Salesforce orgs that aren't connected? Medium
🔄Use Workbench or SFDX CLI — both use Metadata API which works between any two Salesforce orgs, regardless of whether a Deployment Connection is configured between them.
✅ Methods
  • Workbench: Retrieve ZIP from source → login to target → deploy ZIP (simplest for one-time)
  • SFDX CLI: sf project retrieve start from source, sf project deploy start to target
  • Ant Migration Tool: For legacy pipelines that already use Ant
  • Change Sets: Requires pre-configured Deployment Connection — won't work between unconnected orgs
Say This in Interview
"Change Sets require a pre-configured Deployment Connection — for unconnected orgs I use Workbench to retrieve a ZIP from the source and deploy to the target, or SFDX CLI which authenticates to any org independently and deploys via Metadata API."
Q64How do you set up a basic CI/CD pipeline for Salesforce from scratch? Advanced
🔄Connect your Git repository to a CI tool (GitHub Actions/Jenkins), configure authentication via JWT-based OAuth, run tests automatically on every PR, and auto-deploy to sandbox on merge — manual approval gate before Production.
🔄 Pipeline Architecture
Stage | Trigger | Action
Feature Branch | Developer pushes code | SFDX validate against CI sandbox
Pull Request | PR opened | Run all tests, code review required
Merge to Dev | PR merged | Auto-deploy to Dev sandbox
UAT Deploy | Manual trigger | Deploy to UAT, run regression
Production | Manager approval | Quick Deploy (tests already passed)
Say This in Interview
"A Salesforce CI/CD pipeline uses Git as source of truth, GitHub Actions for automation, JWT-based OAuth for headless Salesforce authentication, SFDX for deploy/validate commands, and a manual approval gate before Production — every commit is tested automatically, removing the manual validation bottleneck."
Q65What is Salesforce DevOps Center and when would you use it? Medium
๐Ÿ—️DevOps Center is Salesforce's native tool that provides a Git-connected, pipeline-based deployment experience directly in Setup — without needing external CI/CD tools. It's the middle ground between Change Sets (no Git) and SFDX (requires CLI expertise).
ToolGit?Skill RequiredBest For
Change SetsAdminSmall teams, simple deployments
DevOps CenterAdmin/DevTeams wanting Git without full CLI
SFDX + CI/CDSenior DevLarge teams, automated pipelines
Say This in Interview
"DevOps Center is Salesforce's native Git-connected deployment tool — it bridges the gap between Change Sets (no version control) and full SFDX CI/CD (requires significant DevOps expertise), making it ideal for teams that want Git-based deployments without needing a dedicated DevOps engineer."
Q66Your code coverage dropped below 75% after a deployment — what are your immediate steps? Medium
📉Below 75% means your next Apex deployment will fail. Fix immediately — find the uncovered classes, write test coverage, and deploy the improved test classes as soon as possible.
🔍 Immediate Actions
  • 1
    Run all tests in Production via Setup → Apex Classes → Run All Tests → see overall coverage
  • 2
    Tooling API query: SELECT ApexClassOrTrigger.Name, NumLinesCovered, NumLinesUncovered FROM ApexCodeCoverageAggregate ORDER BY NumLinesUncovered DESC
  • 3
    Identify classes with lowest coverage — these are your priority
  • 4
    Write test classes in sandbox achieving 80%+ on each low-coverage class
  • 5
    Deploy test classes only (no code changes needed) — coverage improves immediately
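A minimal skeleton of the kind of targeted test to deploy — OrderService and its method are placeholders for whatever uncovered class the coverage query surfaces:
@isTest
private class OrderServiceTest {

    @isTest
    static void testDiscountCalculation() {
        // Create own test data — never depend on org data, which differs per environment
        Account acc = new Account(Name = 'Coverage Test Account');
        insert acc;

        Test.startTest();
        Decimal discounted = OrderService.applyDiscount(acc.Id, 1000); // placeholder method
        Test.stopTest();

        // Assert behaviour, not just coverage — assertions catch real regressions
        System.assertEquals(900, discounted, 'Expected 10% discount applied');
    }
}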
Say This in Interview
"I query ApexCodeCoverageAggregate via Tooling API to find the worst-covered classes, write targeted test classes for them in sandbox, and deploy only the test classes — coverage improves without touching production logic, unblocking future deployments."
Q67How do you manage deployment dependencies in a Change Set? Medium
🔗Always add ALL dependent components to the Change Set — if your Apex class references a Custom Field, that field must be in the Change Set. Use "View/Add Dependencies" in the Change Set to auto-detect missing items.
✅ Dependency Management
  • View Dependencies: In Change Set → click "View/Add Dependencies" — Salesforce lists all related components you might have missed
  • Common missed dependencies: Custom Fields referenced in Apex, Custom Labels in LWC, Static Resources, Custom Metadata used in code
  • Validate first: Run validation — missing dependencies cause deployment failure with clear error messages
  • Don't include everything — only include what's changed + its dependencies. Over-inclusion causes unwanted overwrites
Say This in Interview
"I use 'View/Add Dependencies' in the Change Set to auto-detect missing components, then validate before deploying — validation gives clear 'Component not found' errors that tell me exactly what's missing, and I add those before the actual deploy."
Q68How do you handle hotfix deployments in Production during business hours? Advanced
🚑Hotfixes are high-risk — always validate first, communicate proactively with users, and use Quick Deploy to minimise downtime. Never deploy untested hotfixes directly to Production.
🚑 Hotfix Process
  • 1
    Immediately assess impact — is this blocking all users or just some? Can it wait for off-peak?
  • 2
    If urgent: deactivate the problematic component first (flow, validation rule, trigger) to unblock users
  • 3
    Fix in sandbox — test with same data pattern that caused the issue
  • 4
    Validate in Production (Check Only) — confirm tests pass
  • 5
    Quick Deploy if validation passed within last 10 days — completes in 2-3 minutes
  • 6
    Re-activate the deactivated component, communicate resolution to affected users
Say This in Interview
"For hotfixes I deactivate the breaking component first to unblock users immediately, fix in sandbox, validate in Production to confirm tests pass, then Quick Deploy the fix — total downtime under 30 minutes. Communication before, during, and after is just as important as the technical fix."
Q69What is the difference between Validate and Quick Deploy — and when does Quick Deploy expire? Easy
Validate runs all tests without committing changes (dry run). Quick Deploy uses the successful validation to deploy without re-running tests — saving potentially hours of test execution. Quick Deploy is available for 10 days after a successful validation.
Action | Tests Run? | Changes Committed? | Time
Validate (Check Only) | ✅ Yes | ❌ No | 30-90 mins
Full Deploy | ✅ Yes | ✅ Yes | 30-90 mins
Quick Deploy | ❌ Skipped | ✅ Yes | 2-5 mins ✅
๐Ÿญ XYZ Company Strategy

I validate every Friday afternoon (takes ~45 mins). Over the weekend I get approval from Ankit Nahar. Sunday night I use Quick Deploy — completes in under 5 minutes. Zero risk of tests failing mid-deployment during business hours.

Say This in Interview
"Validate is a dry run — tests run but nothing is committed. Quick Deploy uses that validated state to deploy in minutes without re-running tests. The window is 10 days — my standard process is validate on Friday, Quick Deploy on Sunday night."
🔄
Flows & Automation
Flow is Salesforce's primary automation tool in 2026 — know it deeply
Q70–Q77
Q70When do you use Flow vs Apex Trigger — how do you decide? Medium
🤔Start with Flow — if it can do the job cleanly, don't write Apex. Use Apex Trigger when you need: complex SOQL patterns, governor limit control, error handling/retry logic, or processing 1000+ records efficiently.
Scenario | Flow | Apex Trigger
Send email on record update | ✅ Yes | Overkill
Update related records on insert | ✅ Yes (Record-Triggered) | ✅ If complex
Bulk data processing 50k+ records | ⚠️ Can fail | ✅ Batch Apex
Complex cross-object SOQL | Limited | ✅ Yes
API callout | ✅ External Service | ✅ More control
Real-time validation with complex logic | ⚠️ Possible | ✅ Cleaner
Say This in Interview
"I default to Flow for automation — it's declarative, admin-maintainable, and faster to deploy. I move to Apex only when the requirement exceeds what Flow handles cleanly: bulk processing at scale, complex SOQL patterns, retry logic, or operations that would make a Flow unmaintainably complex."
Q71Your Flow is firing multiple times and creating duplicates — how do you fix? Medium
🔄Flow recursion usually means a Record-Triggered Flow updates a record which triggers itself again. Fix with entry conditions, a checkbox field to track if the flow has run, or convert relevant logic to Apex with a static variable.
✅ Fix Options
  • 1️⃣Entry Conditions: Add condition "ISCHANGED(relevant_field)" — only fires when that specific field changes, not on every update
  • 2️⃣Flow_Processed__c checkbox: Create a checkbox field, check at Flow entry (skip if true), set to true at end
  • 3️⃣Fast Field Updates mode: Configure the Flow to run before-save ("Fast Field Updates") — the change is applied before the record is saved, so no extra save cycle fires to retrigger the flow
  • 4️⃣Migrate to Apex: For complex recursion scenarios, Apex static variable pattern is more reliable
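The static-variable pattern from option 4 — a minimal sketch; static state lasts exactly one transaction, which is what makes it a reliable run-once guard:
// Static flag shared across all trigger firings in the same transaction
public class OpportunityTriggerGuard {
    public static Boolean hasRun = false;
}

// In the trigger handler:
// if (OpportunityTriggerGuard.hasRun) { return; } // re-entry in same transaction — skip
// OpportunityTriggerGuard.hasRun = true;
// ...run the automation once...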
Say This in Interview
"Flow recursion is fixed by adding specific ISCHANGED() entry conditions so it only fires on the relevant field change, not every record update. For complex cases, a 'processed' checkbox field or migrating to Apex with a static variable gives more reliable control."
Q72How do you debug a Flow that's failing silently? Medium
๐Ÿ”Use Flow's built-in Debug feature in Flow Builder to step through the flow with test data, check each element's inputs/outputs, and enable Flow error emails to capture runtime exceptions sent to the org's admin email.
✅ Debugging Steps
  • 1
    Setup → Flows → open failing flow → click Debug button → enter test record data → Run
  • 2
    Step through each element — check the input/output values at each decision and assignment
  • 3
    Check the Flow error emails — Setup → Process Automation Settings controls whether they go to the flow's creator or the Apex exception email recipients
  • 4
    Enable Flow Interview Debug Logs via Debug Log levels
  • 5
    Check if Flow is Faulted: SOQL query on FlowInterview object or check Flow's Pause/Resume history
Say This in Interview
"I use Flow Builder's Debug mode with real record IDs to step through every element and inspect input/output values. For runtime failures I check the admin email for Flow error emails, enable debug logs for the running user, and query the FlowInterview object to find faulted interviews."
Q73A Scheduled Flow stopped running — how do you troubleshoot? Medium
Check if the Flow is still Active, verify the schedule conditions are met, check for paused interviews that may have errored, and review admin email for any Flow error notifications from the last run.
๐Ÿ” Troubleshooting Checklist
  • 1️⃣Is it Active? Setup → Flows → verify Status = Active (may have been deactivated after sandbox refresh)
  • 2️⃣Schedule still correct? Check the scheduled path — time, frequency, and criteria
  • 3️⃣Faulted interviews: SOQL: SELECT Id, InterviewLabel FROM FlowInterview WHERE InterviewStatus IN ('Paused', 'Error')
  • 4️⃣Admin email: Check for "Flow Error" emails — describes the failing element and error
  • 5️⃣Sandbox refresh: Was the org recently refreshed? Scheduled Flows are deactivated on refresh
Say This in Interview
"First I verify the Flow is still Active — sandbox refreshes deactivate all flows. Then I check for Faulted FlowInterviews via SOQL, review admin error emails, and confirm the schedule conditions still match the expected data pattern."
Q74How do you migrate a Process Builder to Flow in 2026? Medium
🔄Salesforce provides a built-in "Migrate to Flow" tool in Setup that converts Process Builder processes to equivalent Record-Triggered Flows automatically — with a review step before activation.
✅ Migration Steps
  • 1
    Setup → Process Builder → find the process to migrate
  • 2
    Click "Migrate to Flow" button → Salesforce auto-generates equivalent Record-Triggered Flow
  • 3
    Review the generated Flow — verify logic matches original Process Builder exactly
  • 4
    Test in sandbox with same scenarios that triggered the Process Builder
  • 5
    Activate the new Flow → deactivate the old Process Builder
Say This in Interview
"Salesforce provides a built-in 'Migrate to Flow' tool in the Process Builder list view that auto-converts to a Record-Triggered Flow. I review the generated flow, test in sandbox, activate the Flow, then deactivate the Process Builder — Process Builder is deprecated so this migration is now urgent."
Q75Client wants automated multi-step onboarding emails — what do you build? Medium
📧Build a Record-Triggered Flow with Scheduled Paths on Account/Contact creation — Day 1 email immediately, Day 3 email via a scheduled path, Day 7 email via another scheduled path. All declarative, no code.
Flow Design:
Trigger: Contact Created (After Save)
  ↓
Send Email (Day 1 Welcome) — immediately
  ↓
Wait 3 Days (Scheduled Path)
  ↓
Send Email (Day 3 — Tips & Resources)
  ↓
Wait 7 Days (Scheduled Path)
  ↓
Check: Has contact logged in? (Decision)
  ↓ No                     ↓ Yes
Send Nudge                Send "You're all set!" email
Say This in Interview
"I'd build a Record-Triggered Flow with Scheduled Paths for each delay — Day 1 immediate email, Day 3 wait path, Day 7 wait path with a decision branch based on engagement data. All declarative, deployed in 30 minutes, and easily modified by admins without code changes."
Q76How do you handle Flow errors gracefully — prevent them from breaking the user's transaction? Advanced
🛡️Add a Fault Path to every Flow element that could fail (DML operations, callouts, subflows). Connect each Fault Path to a Create Record element that logs the error, allowing the main transaction to complete successfully.
✅ Fault Path Pattern
  • Every DML element and callout should have a Fault connector drawn to a fault-handling path
  • Fault path → Create a Flow_Error_Log__c record with the fault message ({!$Flow.FaultMessage})
  • Send email to admin with error details so nothing fails silently
  • ⚠️If the Flow error is critical (e.g. payment failed), show a screen with a user-friendly message instead of a generic error
Say This in Interview
"I connect every DML and callout element to a Fault Path that logs the error to a custom object using {!$Flow.FaultMessage} and sends an admin notification — the user transaction completes, errors are captured, and nothing fails silently."
Q77Client needs complex approval with multiple conditions, parallel approvers, and auto-escalation — Flow or Approval Process? Advanced
Use Salesforce Approval Process for structured multi-step approvals with parallel approvers — it's purpose-built for this. Use Flow to trigger the approval submission and handle post-approval actions.
Requirement | Approval Process | Flow
Sequential approval steps | ✅ Built-in | Complex to build
Parallel approvers (any/all) | ✅ Built-in | Very complex
Auto-escalation after X hours | ✅ Built-in | Scheduled Flow + logic
Custom approval criteria | ✅ Formula rules | ✅ More flexible
Trigger the approval | ❌ Needs trigger | ✅ Can submit via Apex action
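The "submit via Apex action" row uses the standard Approval namespace — a minimal sketch, where recordId stands in for the record that met the submission criteria:
// Auto-submit a record into the Approval Process (callable from Flow via invocable Apex)
Id recordId = [SELECT Id FROM Order__c LIMIT 1].Id; // placeholder — normally the triggering record
Approval.ProcessSubmitRequest req = new Approval.ProcessSubmitRequest();
req.setObjectId(recordId);
req.setComments('Auto-submitted: approval conditions met');
Approval.ProcessResult result = Approval.process(req);
System.debug('Submitted for approval: ' + result.isSuccess());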
Say This in Interview
"Approval Process handles multi-step, parallel approver, auto-escalation scenarios natively. I'd build the Approval Process for the approval logic and use a Record-Triggered Flow to automatically submit for approval when conditions are met — each tool doing what it does best."
☁️
Data Cloud
The hottest new skill — scenarios you'll face in Data Cloud roles in 2026
Q78–Q85
Q78Client has customer data in 5 different systems — how does Data Cloud unify it? Medium
☁️Data Cloud ingests data from all 5 systems via Connectors, maps each source to Data Lake Objects (DLO), normalises into Data Model Objects (DMO), then runs Identity Resolution to match records across sources into a Single Unified Individual profile.
Step | Tool | What Happens
1. Ingest | Connectors (CRM, S3, API) | Raw data lands in Data Lake Objects
2. Map | Data Mapping (DLO → DMO) | Standardise fields across sources
3. Unify | Identity Resolution | Match John Smith across all 5 systems
4. Profile | Unified Individual | One golden record per customer
5. Activate | Segments + Actions | Send unified profile to Marketing/Agent
Say This in Interview
"Data Cloud ingests all 5 systems via Connectors into Data Lake Objects, maps fields to normalised Data Model Objects, then runs Identity Resolution to match the same customer across all systems — resulting in one golden Unified Individual profile with the complete customer history."
Q79What happens when Identity Resolution finds conflicting data — which source wins? Advanced
⚖️You configure Reconciliation Rules in Data Cloud to define which source wins for each field when conflicts exist. Common strategies: Last Updated Wins, Source Priority (CRM > Marketing > Web), or Most Frequent value.
Reconciliation Strategy | How It Works | Use When
Last Updated Wins | Most recently modified source's value is used | Contact details, addresses
Source Priority | You rank sources — highest rank wins | CRM data trusted more than web tracking
Most Frequent | Value appearing in most sources wins | Name, demographic data
Say This in Interview
"Identity Resolution conflicts are resolved by Reconciliation Rules — I configure these per field. For email, I'd use Source Priority with CRM ranked highest. For address, Last Updated Wins. For name, Most Frequent — each field gets the strategy that makes the most business sense."
Q80How do you validate that Identity Resolution worked correctly? Advanced
🧪Check the Identity Resolution job results in Data Cloud, review the match/merge statistics, query Unified Individuals for known test records, and compare profile counts before and after resolution runs.
✅ Validation Steps
  • Data Cloud → Identity Resolution → check run results: records processed, matched, merged, unmatched
  • Query Unified Individual for a known test person — verify all source records are linked
  • Check match rate — if too low, rules may be too strict; too high, rules may be creating false merges
  • Validate reconciliation results — check that the right source won for key fields like email and phone
  • Sample 100 unified profiles manually to spot-check accuracy
Say This in Interview
"I validate by checking IR job statistics, querying known test individuals to confirm their source records merged correctly, reviewing the match rate (too low = rules too strict, too high = false merges), and manually spot-checking 100 profiles to confirm reconciliation picked the right source values."
Q81Client wants to activate a Data Cloud segment to Marketing Cloud — walk me through it? Medium
🎯Create a Segment in Data Cloud defining the audience criteria, then set up an Activation targeting Marketing Cloud as the destination — Data Cloud automatically syncs matching Unified Profiles to MC as a Contact/Subscriber list.
📋 Step-by-Step
  • 1
    Data Cloud → Segments → Create Segment on Unified Individual (e.g. "High Value Customers India")
  • 2
    Add filters: Total Order Value > ₹5L, Country = India, Last Purchase < 90 days
  • 3
    Publish Segment → verify member count looks correct
  • 4
    Data Cloud → Activations → Create Activation → select Marketing Cloud as target
  • 5
    Map Unified Individual fields to MC Contact attributes
  • 6
    Set refresh schedule (real-time or batch) → Activate → segment syncs to MC automatically
Say This in Interview
"I create the Segment in Data Cloud with audience criteria, publish it, then set up an Activation pointing to Marketing Cloud — mapping Unified Individual fields to MC Contact attributes. On the next refresh, Data Cloud automatically syncs matching profiles to MC as a ready-to-target subscriber list."
Q82How do you ensure Data Cloud respects GDPR? Advanced
🔒Configure Data Spaces to isolate data by region/purpose, use Consent API to enforce opt-outs before activation, enable data deletion propagation, and choose the appropriate Hyperforce region for data residency compliance.
✅ GDPR Compliance in Data Cloud
  • Data Spaces: Isolate EU customer data in a separate Data Space with restricted access
  • Consent Management: Use Data Cloud Consent API — opt-out records are excluded from segments automatically
  • Right to Erasure: Data Cloud propagates deletions — delete the Unified Profile, source data is marked for deletion
  • Data Residency: Choose EU Hyperforce region — data never leaves EU boundaries
  • Retention Policies: Configure data retention periods per object type
Say This in Interview
"GDPR in Data Cloud requires: Data Spaces for isolation, Consent API so opt-outs auto-exclude from segments, deletion propagation for right-to-erasure requests, EU Hyperforce region for data residency, and retention policies per data type — all configurable without code."
Q83How do you map a custom Salesforce CRM object to Data Cloud? Medium
🔗Use the Salesforce CRM Connector in Data Cloud, ingest the custom object as a Data Lake Object, then create a Data Mapping to map its fields to the appropriate Data Model Object (DMO) in the canonical model.
📋 Mapping Steps
  • 1
    Data Cloud → Data Sources → Salesforce CRM Connector → Add Object → select your custom object
  • 2
    Configure which fields to ingest (don't ingest all — only what's needed)
  • 3
    The object appears as a DLO (Data Lake Object) — raw ingested data
  • 4
    Data Cloud → Data Streams → create Data Mapping from DLO to relevant DMO (e.g. Sales Order DMO)
  • 5
    Map each DLO field to the correct DMO field — set the Individual ID link for profile association
  • 6
    Run Identity Resolution to link the mapped DMO records to Unified Profiles
Say This in Interview
"I use the Salesforce CRM Connector to ingest the custom object as a DLO, create a Data Mapping from DLO to the appropriate DMO, link it to the Individual ID for profile association, then re-run Identity Resolution to associate the new data with existing Unified Profiles."
Q84Agentforce is giving wrong AI responses — is Data Cloud the issue? How do you diagnose? Advanced
🔬Wrong Agentforce responses can come from bad grounding data (Data Cloud), bad prompt instructions (Prompt Builder), or wrong Topic configuration. Diagnose each layer separately to isolate the root cause.
🔍 Diagnostic Approach
  • 1️⃣Test grounding data first: Query the Unified Profile directly — does the Data Cloud profile have the correct information?
  • 2️⃣If profile is wrong → Data Cloud issue: Check Identity Resolution, data mapping, and source data quality
  • 3️⃣If profile is correct → Agentforce issue: Review Prompt Builder instructions — is the AI misinterpreting good data?
  • 4️⃣Check Topic scope: Is the agent answering questions it shouldn't be?
  • 5️⃣Check grounding source: Is Agentforce grounded on the right Data Cloud dataset or a stale one?
Say This in Interview
"I isolate each layer — first query the Unified Profile to confirm Data Cloud has correct data. If the profile is wrong, it's a Data Cloud issue (mapping, IR, source quality). If the profile is correct, the issue is Agentforce's prompt instructions or Topic configuration misinterpreting good data."
Q85Client says Data Cloud is too expensive — what's your ROI argument? Medium
💰Position Data Cloud ROI around three dimensions: revenue increase from personalisation, cost reduction from operational efficiency, and risk reduction from unified compliance. Always quantify with their existing numbers.
💰 ROI Framework
Dimension | Metric | Typical Impact
Revenue Increase | Personalised campaign lift | 15-30% better conversion
Cost Reduction | Agent handle time with full context | 25-40% faster resolution
Data Cost Savings | Eliminate duplicate records across systems | 20% data storage reduction
Risk Reduction | GDPR compliance and breach prevention | Quantify breach cost avoided
Say This in Interview
"The ROI argument for Data Cloud is: personalisation lift increases revenue by 15-30%, unified context reduces agent handle time by 25-40%, and unified compliance reduces GDPR risk — I'd quantify each using the client's own revenue and cost numbers to make it tangible, not theoretical."
🔍
SOQL & Performance
Query optimisation and large data volume scenarios — asked in every technical round
Q86–Q93
Q86Your SOQL query is timing out on large data volumes — how do you fix it? Advanced
⏱️SOQL timeout (120 seconds limit) on large data means the query is doing too much work — no index, too many filters, or returning too many fields. Optimise the query and the data model.
✅ Optimisation Strategies
  • 1️⃣Selective queries: Always filter on indexed fields (Id, Name, External ID, custom indexed fields) — non-selective queries scan entire table
  • 2️⃣Custom Index: Contact Salesforce Support to add a custom index on frequently-queried non-standard fields
  • 3️⃣Reduce fields returned: SELECT only the fields you need — long text area and rich text fields are especially expensive to return
  • 4️⃣Skinny Tables: For very high-volume objects, Salesforce can create a skinny table for faster queries (contact Salesforce Support)
  • 5️⃣LIMIT and pagination: Use LIMIT + OFFSET or cursor-based pagination instead of returning all records at once
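A before/after sketch of strategy 1 — object and field names are illustrative:
// Non-selective: negative filter on an unindexed field scans the whole table
// List<Order__c> slow = [SELECT Id FROM Order__c WHERE Status__c != 'Closed'];

// Selective: indexed lookup filter plus a narrow date window
Id accountId = '001000000000001AAA'; // placeholder
List<Order__c> fast = [
    SELECT Id, Amount__c
    FROM Order__c
    WHERE Account__c = :accountId       // lookup fields are indexed automatically
      AND CreatedDate = LAST_N_DAYS:30  // shrinks the scanned window
];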
Say This in Interview
"SOQL timeouts mean a non-selective query scanning the full table — I'd filter on indexed fields first, request a custom index for frequently-queried fields via Salesforce Support, reduce returned fields, and implement cursor-based pagination instead of retrieving all records at once."
Q87When do you use SOSL instead of SOQL — and can you give a real example? Easy
๐Ÿ”Use SOSL when you need to search text across multiple objects simultaneously, especially when you don't know which object contains the data. Use SOQL when you know the exact object and need field-level precision.
ScenarioUse SOQLUse SOSL
Find all Orders for Account X✅ Yes❌ No
Search "Acme" across Accounts, Contacts, Leads❌ No (3 queries)✅ Yes (1 query)
Get Order total > 10000✅ Yes (numeric filter)❌ Text only
Global search bar results❌ Too complex✅ Yes
// SOSL — search "Acme" across 3 objects in one query
FIND 'Acme*' IN ALL FIELDS
RETURNING Account(Id, Name),
          Contact(Id, Name, Email),
          Lead(Id, Name)
Say This in Interview
"I use SOSL when the search text could be in multiple objects and I don't know which one — like a global search feature. It returns results from all specified objects in one query. SOQL is for precise, field-level retrieval from a known object structure."
Q88How do you write an efficient parent-to-child relationship query? Medium
🔗Use a nested SOQL subquery in the SELECT clause — query the parent object and include a subquery on the child relationship name in parentheses. The result is a nested list of child records on each parent.
// Parent-to-Child: Get Accounts with their Orders
// Orders__r is the child relationship name (custom relationships use the __r suffix)
List<Account> accounts = [
    SELECT Id, Name,
        (SELECT Id, Order_Number__c, Amount__c
         FROM Orders__r
         WHERE Status__c = 'Active'
         ORDER BY Amount__c DESC
         LIMIT 10)
    FROM Account
    WHERE Industry = 'Manufacturing'
];

// Access child records
for (Account a : accounts) {
    for (Order__c o : a.Orders__r) { // nested list
        System.debug(o.Amount__c);
    }
}
Say This in Interview
"Parent-to-child queries use a nested subquery in parentheses referencing the child relationship name (with __r for custom objects) — this returns the parent records with their children pre-loaded in a nested list, avoiding the N+1 query problem entirely."
Q89Your report is showing different data to different users — SOQL or sharing issue? Medium
๐Ÿ”Reports automatically respect the running user's sharing model — if users see different records in the same report, it's a sharing/visibility issue, not a SOQL issue. The report itself is correct.
๐Ÿ” Diagnosis
  • This is expected behaviour: Reports only show records the running user can see — by design
  • If all users should see all records: adjust sharing or grant the object's "View All" permission — the report's "Show Me: All" filter still respects the running user's sharing
  • Use "Run As" in Reports: Dashboards can be set to run as a specific user — everyone sees the same data regardless of their own sharing
  • ⚠️Don't change OWD just to fix a report — use Dashboard running user feature instead
Say This in Interview
"Different data in the same report for different users is expected — reports respect the running user's sharing model. If everyone should see the same data, I'd set the Dashboard to run as a specific trusted user, not loosen sharing org-wide for a reporting use case."
Q90How do you use aggregate SOQL for a management dashboard showing revenue by region? Medium
📊Use GROUP BY with SUM() aggregate function in SOQL to calculate totals by region in one query — store results in AggregateResult[] and map them to a data structure for LWC display.
// Revenue by region — one SOQL query
List<AggregateResult> results = [
    SELECT Region__c,
           SUM(Amount__c) totalRevenue,
           COUNT(Id) orderCount,
           AVG(Amount__c) avgOrderValue
    FROM Order__c
    WHERE Status__c = 'Delivered'
      AND CALENDAR_YEAR(CreatedDate) = 2026
    GROUP BY Region__c
    HAVING SUM(Amount__c) > 0
    ORDER BY SUM(Amount__c) DESC
];

for (AggregateResult ar : results) {
    String region = (String) ar.get('Region__c');
    Decimal revenue = (Decimal) ar.get('totalRevenue');
}
Say This in Interview
"I use GROUP BY with SUM(), COUNT(), and AVG() aggregate functions — one SOQL query returns all regions with their totals. Results come back as AggregateResult[] and I use .get('aliasName') to extract values. HAVING filters groups after aggregation, not individual records."
Q91How do you handle a SOQL query that needs to return more than 50,000 records? Advanced
📦SOQL in a single transaction is limited to 50,000 records. For larger datasets, use Database.QueryLocator in Batch Apex which can handle up to 50 million records, or implement cursor-based pagination.
// Batch Apex — handles millions of records
global class LargeDataBatch implements Database.Batchable<sObject> {

    global Database.QueryLocator start(Database.BatchableContext bc) {
        // QueryLocator streams records — no 50k limit!
        return Database.getQueryLocator(
            'SELECT Id, Amount__c FROM Order__c WHERE CreatedDate = THIS_YEAR'
        );
    }

    global void execute(Database.BatchableContext bc, List<Order__c> scope) {
        // Process one chunk at a time (default 200 records)
    }

    global void finish(Database.BatchableContext bc) {
        // Required by the Batchable interface — post-processing and notifications go here
    }
}
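Launch it with an explicit scope size (2,000 is the maximum per chunk):
// Each execute() receives up to 2,000 records, with fresh governor limits per chunk
Id jobId = Database.executeBatch(new LargeDataBatch(), 2000);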
Say This in Interview
"For more than 50,000 records I use Database.getQueryLocator in Batch Apex — it streams records from the database without the 50k limit, processing them in configurable chunks of up to 2,000 each with fresh governor limits per batch."
Q92What makes a SOQL query selective and why does it matter? Advanced
🎯A selective SOQL query filters on indexed fields and returns fewer records than the index's selectivity threshold. Non-selective queries cause full table scans, timeouts, and errors.
📊 Selectivity Thresholds
Index Type | Threshold (first 1M records) | Threshold (beyond 1M) | Hard Cap
Standard index | < 30% of records | < 15% | 1,000,000 records
Custom index | < 10% of records | < 5% | 333,333 records
✅ Indexed Fields (auto-indexed in Salesforce)
  • Id, Name, OwnerId, CreatedDate, SystemModstamp (all objects) — note SystemModstamp is indexed, LastModifiedDate is not
  • Lookup/Master-Detail fields (parent ID fields)
  • Fields marked as External ID or Unique
  • Other custom fields are NOT indexed by default — request a custom index via Salesforce Support
Say This in Interview
"A selective query filters on indexed fields returning under 10% of records — this allows Salesforce to use the index for fast lookup instead of scanning every row. Non-selective queries on large objects cause timeouts and EXCEEDED_MAX_SIZE errors that can't be caught in Apex."
Q93How do you query across multiple custom objects efficiently for a complex report? Medium
🔗Use relationship queries (parent-to-child or child-to-parent) to traverse objects in one SOQL query. For unrelated objects, run separate queries and join in Apex using Maps — never use multiple SOQL queries where one relationship query would work.
// One query across Account → Order → Order Line Items
List<Account> accounts = [
    SELECT Id, Name,
        (SELECT Id, Amount__c,
            (SELECT Id, Product__r.Name, Quantity__c
             FROM Order_Line_Items__r)
         FROM Orders__r
         WHERE Status__c = 'Active')
    FROM Account
    WHERE Id IN :accountIds
];
Say This in Interview
"I use nested relationship queries to traverse up to 5 levels in one SOQL statement — Account → Orders → Line Items in a single query. For truly unrelated objects, I run separate queries and join in Apex using Maps keyed on the shared ID, keeping total SOQL count low."
🔧
Admin Troubleshooting
Real-world scenarios every Salesforce Admin and Developer faces in Production
Q94–Q100
Q94User is locked out of Salesforce — what are your steps? Easy
🔓Unlock the user from Setup → Users → Unlock immediately. Check the cause — wrong password attempts or IP restriction violation — and address the root cause to prevent recurrence.
🔍 Steps
  • 1
    Setup → Users → find the user → click "Unlock" link next to their name — immediate access restored
  • 2
    Investigate cause: Setup → Login History → check failed login attempts for that user
  • 3
    If IP restriction: check user's Profile for login IP ranges — add their current IP if legitimate
  • 4
    Reset password if needed: Setup → Users → Reset Password → user gets email
  • 5
    If MFA issue: Setup → Users → open the user → disconnect their registered one-time password authenticator → user re-registers the authenticator app
Say This in Interview
"First click Unlock in Setup → Users to restore immediate access, then check Login History to understand why — too many failed attempts means a forgotten password, IP restriction means they're logging in from an unrecognised network. Fix the root cause so it doesn't happen again."
Q95A Validation Rule is blocking all record saves after a deployment — how do you fix fast? Medium
🚨This is a Production emergency — act fast. Deactivate the Validation Rule immediately to unblock users, investigate the cause, fix in sandbox, deploy the corrected rule.
🚨 Emergency Response
  • 1
    IMMEDIATELY: Setup → Object Manager → find the object → Validation Rules → find the new rule → DEACTIVATE
  • 2
    Communicate to affected users that the issue is resolved — they can save records now
  • 3
    In sandbox, reproduce the failing save — understand what condition the rule checks that's always true
  • 4
    Fix the rule formula (usually a missing null check or wrong field reference)
  • 5
    Test in sandbox with various record scenarios before re-deploying
  • 6
    Deploy corrected rule, activate in Production, monitor for 30 mins
Say This in Interview
"Deactivate the broken Validation Rule immediately to unblock all users — this takes 30 seconds. Then diagnose in sandbox — usually a missing null check or an always-true formula condition. Fix, test edge cases, redeploy, and activate. User impact under 5 minutes if you act fast."
Q96Client wants to mass-update 10,000 records without code — how? Easy
📤Use Data Loader (free, installed tool) or the built-in Data Import Wizard for standard objects. Export the records, update the field in Excel/CSV, then run an update operation using the Salesforce record IDs.
📋 Data Loader Steps
  • 1
    Data Loader → Export → write SOQL to get the 10,000 records with their IDs → export as CSV
  • 2
    Open CSV in Excel → update the target field column → save
  • 3
    Data Loader → Update → select the same object → upload the modified CSV
  • 4
    Map the ID column to Id field, map changed field to target field
  • 5
    Run update → check success/error files → address any failures
Say This in Interview
"Export the records with IDs using Data Loader, update the field in Excel, then run an Update operation in Data Loader using the ID column as the match key — 10,000 records process in under 2 minutes. Always do a small test batch of 10 records first to validate the mapping."
Q97After a Salesforce seasonal release, a feature stopped working — how do you approach this? Medium
🔄Salesforce releases can deprecate features, change API behaviour, or alter default settings. Check Salesforce's Release Notes for the specific release, identify what changed, and adapt your configuration or code accordingly.
🔍 Investigation Steps
  • 1️⃣Read Salesforce Release Notes for the current release — search for the feature name or affected area
  • 2️⃣Check Salesforce Known Issues (issues.salesforce.com) — may be a bug reported by others
  • 3️⃣Test in sandbox on the new release — compare behaviour before/after
  • 4️⃣Check if Release Updates (formerly Critical Updates) were auto-enforced — some updates change default behaviour
  • 5️⃣Contact Salesforce Support with a reproducible case if it's a confirmed bug
Say This in Interview
"First check the Release Notes for the current release and the Salesforce Known Issues site — most post-release breaks are documented. I also check Critical Updates that may have been auto-activated during the release window, as these can silently change org behaviour."
Q98A Flow email alert is sending to wrong users — how do you debug? Medium
📧Wrong recipients in Flow emails usually means the Flow is using a hardcoded email, the wrong field for recipient lookup, or the Email Alert itself was configured with wrong recipients. Check each layer.
🔍 Debug Steps
  • 1
    Open the Flow → find the Send Email or Email Alert element → check recipient configuration
  • 2
    If using Email Alert: Setup → Email Alerts → find the alert → check To Recipients list
  • 3
    Check if Flow has a hardcoded email string — should use a field reference like {!$Record.Owner:User.Email}
  • 4
    Run Flow Debug with test record → step through email element → see what recipient is resolved to
  • 5
    Check if the record's Owner field is correct for the triggering record
Say This in Interview
"I use Flow's Debug mode to step through the email element and see what recipient value is actually resolved — wrong recipients usually mean a hardcoded email string, a wrong field reference, or the Email Alert object itself listing incorrect recipients."
Q99Client's Salesforce dashboard is loading very slowly — what do you check? Medium
📊Dashboard slowness is usually caused by reports with large data volumes, non-selective SOQL in underlying reports, too many components on one dashboard, or running user having access to millions of records.
✅ Performance Checklist
  • 1️⃣Reduce report complexity: Apply date filters to limit data volume (This Year, This Quarter, not All Time)
  • 2️⃣Fewer components: One dashboard page should have max 10-12 components
  • 3️⃣Schedule refresh: Set dashboard to auto-refresh nightly — users see cached data instantly
  • 4️⃣Running user: If dashboard runs as System Admin (all records), consider a running user with filtered data
  • 5️⃣Split dashboards: One dashboard per area (Sales, Marketing, Support) instead of one giant dashboard
Say This in Interview
"Dashboard slowness almost always means underlying reports returning too many records — I add date filters to reports, reduce dashboard component count, schedule overnight refresh so users see cached data instantly, and consider splitting into smaller focused dashboards."
Q100You've just joined a new Salesforce org — how do you assess its health? Advanced
๐ŸฅA systematic org health assessment covers code quality, technical debt, security posture, automation complexity, data quality, and deployment practices — giving you a prioritised roadmap for improvement.
✅ Org Health Audit Checklist
  • 1️⃣Code Coverage: Run all tests → overall coverage above 75%? Individual classes above 80%?
  • 2️⃣Automation audit: Count of active Flows, Process Builders (deprecated!), Workflow Rules, Triggers — look for conflicts
  • 3️⃣Security review: OWD settings, Profile permissions, Permission Sets, field-level security audit
  • 4️⃣Data quality: Duplicate records count, missing required fields, orphaned records
  • 5️⃣Technical debt: Dead code (unused Apex classes), inactive flows that aren't deleted, deprecated API versions
  • 6️⃣Deployment practice: Are they using Change Sets? SFDX? Any CI/CD? Is there a sandbox strategy?
  • 7️⃣Governor limit headroom: Check API usage, data storage, file storage against limits
๐Ÿญ XYZ Company Audit

I conducted a full Salesforce CRM audit at XYZ Company — found 12 inactive Process Builders that needed migrating to Flow, 3 Apex classes with 0% test coverage pulling down org average, and 2 triggers on the same object without a handler framework. The audit report went to Alpesh Gandhi and drove a 3-month technical debt reduction sprint.

Say This in Interview
"I audit in 7 areas: code coverage, automation complexity (especially deprecated Process Builders), security model, data quality, technical debt (dead code, inactive config), deployment maturity, and governor limit headroom — producing a prioritised list that separates urgent risks from long-term improvements."