Top 120 Salesforce Interview Questions Asked at Accenture 2026 | Apex, LWC, SOQL & Integration
🏢 Company-Specific Interview Prep 2026
Salesforce Interview Questions
Asked at Accenture 2026
120 real Salesforce interview questions asked at Accenture — Apex, LWC, SOQL, Integration, Security, Flows & Architecture. Built from actual candidate experiences on Glassdoor & LinkedIn. 100% Free.
120 Questions · 9 Topics · 3 Rounds Covered · 100% Free
🏢 Accenture Salesforce Interview Facts
⏱️ Process Duration: 19–28 days avg
😊 Positive Experience: 61–75% positive
⭐ Difficulty: Medium (3/5)
🎯 Focus: Practical + Scenario
📋 Accenture Interview Process (3 Rounds)
R1
Online MCQ Test (30-45 mins)
Salesforce fundamentals, Apex basics, admin concepts, governor limits. Multiple choice — speed matters.
R2
Technical Interview (1-1.5 hrs)
Deep dive: Apex, LWC, SOQL, Integration. Live coding exercises. Architecture-level questions even for developer roles. Scenario-based.
R3
Managerial / HR Round (30 mins)
Behavioral questions, project experience, consulting mindset, trade-off decisions, salary discussion.
💡 Accenture-Specific Tips
🎯 Architecture mindset is key — Always explain WHY you chose an approach, not just WHAT it is. Trade-offs matter.
💻 Practice live coding — Write triggers and Apex by hand. Live coding exercises are common at Accenture.
🔒 Master the security model end-to-end — OWD, profiles, roles, permission sets, sharing rules. Comes up in every round.
🔗 Know integration patterns — REST vs SOAP, Named Credentials, callouts, platform events. Accenture is integration-heavy.
📚 Read the latest Salesforce release notes — Interviewers notice when you know recent platform changes.
📋 Topics Covered — 120 Questions
☁️ Section 1 — Salesforce Fundamentals
Q1–Q15 · Basic → Intermediate · Asked in Round 1 MCQ + Round 2
Q1
What is Salesforce multi-tenant architecture and why does it matter?
✅ Direct Answer
Salesforce is a cloud CRM where multiple customers (tenants) share the same infrastructure — same servers, same database — while their data remains completely isolated. Governor limits exist specifically because of multi-tenancy — one org's runaway code cannot consume resources meant for other tenants.
💡 Why?
Multi-tenancy enables cost-effective SaaS — one shared infrastructure serves thousands of orgs. It's why Salesforce delivers automatic upgrades (3 per year) without downtime. Governor limits are the price of multi-tenancy — protecting all tenants from poorly written code in any one org.
🌍 Real World Example
Think of a high-rise apartment — all residents share plumbing and electrical infrastructure, but each apartment is private. At XYZ Company, our Salesforce org runs on the same Salesforce servers as thousands of other companies — but our data is completely invisible to them. This shared model is why our per-user cost is a fraction of on-premise CRM.
🔑 Key Points for Interviewer
- Multi-tenant = shared infrastructure, isolated data
- Governor limits protect all tenants from resource monopolization
- 3 automatic releases per year — Spring, Summer, Winter
- Salesforce Trust (trust.salesforce.com) for real-time status
- Benefits: lower cost, automatic upgrades, faster deployment
🎤 One-Line Answer
"Salesforce multi-tenant architecture means all customers share one infrastructure with isolated data — governor limits exist to prevent any single org from monopolizing shared resources."
Q2
What is the Salesforce order of execution? Why is it critical to understand?
✅ Direct Answer
On record save: 1) System Validation 2) Before-save Flows 3) Before Triggers 4) Custom Validation Rules 5) Duplicate Rules 6) After Triggers 7) Assignment/Auto-response/Workflow Rules 8) After-save Flows 9) Roll-up Summary recalculation 10) Criteria-based Sharing 11) Commit to database.
💡 Why?
Order of execution explains unexpected behavior — a before trigger sets a field value, then validation rules fire and see that value, then after triggers fire with the committed record. Knowing the order lets you place logic in the right place and debug conflicts between automations. Accenture asks this in almost every technical round.
🌍 Real World Example
Bug: Validation rule failing even though trigger should have set the required field. Investigation: a Before-save Flow was overwriting the trigger's value to blank (Flow runs BEFORE triggers now). Fix: removed the conflicting Flow assignment. Without knowing order of execution, this bug would take hours to find.
🔑 Key Points for Interviewer
- Before-save Flow runs BEFORE before triggers (important new behavior)
- Validation rules fire AFTER before triggers — triggers can satisfy validation
- After triggers fire before workflow rules
- Roll-up summary recalculation runs near the end of the transaction, before commit — the parent record re-enters the save procedure
- Most common debugging tool for automation conflicts
🎤 One-Line Answer
"Order: System Validation → Before-save Flow → Before Trigger → Custom Validation → After Trigger → Workflow → After-save Flow → Commit — before-save flows now run BEFORE before triggers."
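A minimal sketch of the "triggers can satisfy validation" point above — the object choice and the Region__c field are illustrative assumptions, not from the original:

```apex
// Hypothetical before-insert trigger. Because custom validation rules
// fire AFTER before triggers, a rule requiring Region__c will pass
// once this trigger has populated the field.
trigger AccountDefaults on Account (before insert) {
    for (Account acc : Trigger.new) {
        if (acc.Region__c == null) {
            acc.Region__c = 'EMEA'; // set before validation rules run
        }
    }
}
```

The same mechanism explains the Flow bug in the example: a before-save Flow runs even earlier, so it can overwrite a value a trigger was expected to provide.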
Q3
What are governor limits? Name the most important ones and how you handle them.
✅ Direct Answer
Governor limits cap resource usage per transaction in Salesforce's multi-tenant environment. Most critical: 100 SOQL queries per sync transaction, 150 DML statements, 10,000 DML rows, 6 MB heap size, 10,000 ms CPU time. Handle via bulkification — process collections, never SOQL/DML inside loops.
💡 Why?
Without limits, one org's infinite loop crashes shared servers affecting thousands of other orgs. Limits enforce good coding practices (bulkification) that happen to also be more performant. Accenture interviews specifically ask about limits + solutions because poor limit management is the #1 production bug source.
🌍 Real World Example
Developer wrote SOQL inside a for loop — 200 records × 1 SOQL = 200 queries, hitting the 100 limit. Error: "Too many SOQL queries: 101." Fix: moved SOQL outside loop, used Map<Id, SObject> to lookup records. This is the most common interview coding exercise at Accenture.
🔑 Key Points for Interviewer
- SOQL: 100 sync / 200 async
- DML: 150 statements / 10,000 rows
- Heap: 6 MB sync / 12 MB async
- CPU: 10,000 ms sync / 60,000 ms async
- Solution: bulkify — always work with collections, never SOQL/DML in loops
🎤 One-Line Answer
"Governor limits protect the multi-tenant platform — 100 SOQL, 150 DML, 6 MB heap per transaction — solved by bulkification: move all queries and DML outside loops, process collections."
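The bulkified Map pattern described above can be sketched as follows — the trigger name and the Industry_Copy__c field are illustrative assumptions:

```apex
// One SOQL query for the whole batch, then Map lookups inside the loop.
trigger ContactEnrich on Contact (before insert) {
    // 1) Collect parent Ids across all records in the transaction
    Set<Id> accountIds = new Set<Id>();
    for (Contact c : Trigger.new) {
        if (c.AccountId != null) accountIds.add(c.AccountId);
    }
    // 2) A single query outside the loop — 1 query instead of 200
    Map<Id, Account> accountsById = new Map<Id, Account>(
        [SELECT Id, Industry FROM Account WHERE Id IN :accountIds]
    );
    // 3) Constant-time map lookups — no SOQL or DML inside the loop
    for (Contact c : Trigger.new) {
        Account parent = accountsById.get(c.AccountId);
        if (parent != null) c.Industry_Copy__c = parent.Industry;
    }
}
```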
Q4
What is the difference between master-detail and lookup relationships?
✅ Direct Answer
Master-Detail: tight coupling — child cannot exist without parent, child inherits parent's sharing, supports roll-up summary fields, cascade delete. Lookup: loose coupling — child can exist without parent, independent ownership and sharing, no roll-up summary, no cascade delete by default.
💡 Why?
Relationship type determines data integrity and reporting capabilities. Choosing wrong type is expensive to fix — converting Master-Detail to Lookup loses roll-up summaries; converting Lookup to Master-Detail requires no null values in the field. Accenture data architects make this decision first in every project.
🌍 Real World Example
Order_Line_Item__c to Order__c: Master-Detail (line item has no meaning without order — delete order, delete all lines, roll-up total). Contact to Account: Lookup (a contact can exist without an account — a personal contact has no employer). Wrong choice: A pharma project used Lookup for Visit Report to Account — couldn't aggregate visit counts without extra code.
🔑 Key Points for Interviewer
- Master-Detail: roll-up summary, cascade delete, inherited sharing, required parent
- Lookup: no roll-up, optional parent, independent sharing
- Junction Object: two Master-Details = many-to-many
- Max 2 Master-Detail relationships per object
- Can convert Lookup → Master-Detail only if no null values exist
🎤 One-Line Answer
"Master-Detail: tight coupling, roll-up summary, cascade delete, required parent. Lookup: loose coupling, optional parent, independent sharing — choose based on whether the child can exist without a parent."
Q5
What is a junction object and when do you use it?
✅ Direct Answer
A junction object is a custom object with two Master-Detail relationships used to create a many-to-many relationship between two objects. Example: One Doctor can treat many Patients; one Patient can have many Doctors — Treatment__c is the junction with Master-Detail to both Doctor__c and Patient__c.
💡 Why?
Salesforce doesn't support native many-to-many relationships on SObjects. Junction objects bridge two objects and can store additional attributes about the relationship (e.g., enrollment date, relationship type). This is a fundamental data modeling concept Accenture tests at all experience levels.
🌍 Real World Example
Healthcare project: Doctor__c (many) ↔ Patient__c (many). Created Treatment__c junction: Master-Detail to Doctor__c + Master-Detail to Patient__c. Added fields: Treatment_Date__c, Diagnosis__c, Billing_Code__c. Reports on Treatment__c showed full doctor-patient relationship data with treatment details.
🔑 Key Points for Interviewer
- Exactly 2 Master-Detail relationships required
- Inherits sharing from both parents (most restrictive applies)
- Can store attributes about the relationship itself
- Uses both Master-Detail slots — no room for a 3rd
- Reports can be built on junction objects for full many-to-many visibility
🎤 One-Line Answer
"Junction object has two Master-Detail relationships to create many-to-many between two objects — it also stores attributes about the relationship like dates, status, or billing information."
Q6
What is data skew and how do you avoid it?
✅ Direct Answer
Data skew occurs when one parent record has 10,000+ child records, causing lock contention during concurrent DML operations — multiple processes trying to update children of the same parent simultaneously results in "UNABLE_TO_LOCK_ROW" errors. Avoid by distributing records across account hierarchies and multiple owners.
💡 Why?
When many users simultaneously update child records of the same parent, Salesforce locks the parent to update roll-up summaries — creating a bottleneck. Enterprise clients with millions of transactions hit this without proper data model design. Accenture asks this because it's a critical large-data-volume concern on every enterprise project.
🌍 Real World Example
A retail client had one "Global Account" owning 500,000 Opportunities. During batch processing, 50 concurrent jobs all tried to update Opportunities under the same Account — lock errors cascaded. Solution: created a Regional Account hierarchy (Global → Regional → Country) distributing load below 10,000 per account.
🔑 Key Points for Interviewer
- Account skew: >10,000 children per account
- Ownership skew: >10,000 records owned by one user/queue
- Solution: account hierarchies, distributed ownership
- Defer sharing recalculation in Setup during bulk operations
- Design data model for >1M records from the start, not after
🎤 One-Line Answer
"Data skew (>10,000 children per parent) causes lock contention — prevent with account hierarchies and distributed ownership so no single parent becomes a concurrent DML bottleneck."
Q7
What is a Connected App in Salesforce?
✅ Direct Answer
A Connected App registers an external application in Salesforce for OAuth 2.0 authentication — enabling secure API access without sharing usernames and passwords. It defines the external app's identity (consumer key/secret), OAuth scopes (what it can access), and access policies.
💡 Why?
Instead of sharing Salesforce credentials with every integration, each connected app gets its own OAuth credentials. If one integration is compromised, you revoke just that connected app's token — other integrations are unaffected. Accenture uses connected apps for every client integration they build.
🌍 Real World Example
Our ERP integration uses a Connected App with Client Credentials OAuth flow. ERP sends consumer key + secret → Salesforce validates → returns access token → ERP uses token in all API calls. If ERP system is compromised, we revoke the connected app token in Salesforce instantly — no password changes needed anywhere.
🔑 Key Points for Interviewer
- Consumer Key + Secret = the app's OAuth credentials
- OAuth flows: Web Server, JWT Bearer, Client Credentials, Username-Password (avoid)
- Scopes define what the app can access (api, refresh_token, full)
- Named Credentials use Connected Apps under the hood
- IP restrictions and allowed users configurable per Connected App
🎤 One-Line Answer
"A Connected App registers an external application for OAuth-based API access — more secure than credentials, scoped to specific permissions, and instantly revocable per integration."
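A hedged sketch of how a Named Credential (which wraps the Connected App's OAuth handshake) keeps secrets out of Apex — the ERP_API credential name and the endpoint path are assumptions:

```apex
// Callout via a hypothetical Named Credential. Salesforce injects the
// OAuth access token at runtime — no secrets or Authorization header in code.
HttpRequest req = new HttpRequest();
req.setEndpoint('callout:ERP_API/orders/recent');
req.setMethod('GET');
HttpResponse res = new Http().send(req);
System.debug(res.getStatusCode() + ' ' + res.getBody());
```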
Q8
What is a managed package vs unmanaged package?
✅ Direct Answer
Managed Package: AppExchange product — code is protected/hidden (black box), upgradeable by the ISV, requires namespace prefix, intellectual property protected. Unmanaged Package: code fully visible and editable after installation, not upgradeable by publisher, used for sharing templates or open-source components.
🌍 Real World Example
DocuSign for Salesforce = managed package — you see it works but can't read the Apex code. DLRS (Declarative Lookup Rollup Summaries) = unmanaged — install and modify code freely. Accenture builds managed packages for products deployed to multiple clients and unmanaged packages for reusable starter templates within their practice.
🔑 Key Points for Interviewer
- Managed: protected code, upgradeable, namespace prefix (e.g., dlrs__)
- Unmanaged: editable, not upgradeable, no namespace
- 2GP (Second Generation Packaging): newer, source-driven, scratch org based
- Managed packages can't be fully deleted — plan carefully before publishing
- Accenture builds 2GP managed packages for ISV client products
🎤 One-Line Answer
"Managed packages protect code and allow ISV upgrades (AppExchange products). Unmanaged packages are editable snapshots — used for templates and open-source Salesforce components."
Q9
What are the different sandbox types in Salesforce?
✅ Direct Answer
Developer (200 MB, refresh daily — individual dev). Developer Pro (1 GB, refresh daily). Partial Copy (5 GB with a sample of production data, refresh every 5 days — QA/UAT). Full (complete production copy including all data, refresh every 29 days — pre-production testing). Scratch Org (SFDX — temporary, disposable, source-driven).
🌍 Real World Example
Accenture project sandbox strategy: Developer Sandbox (12 individual devs), Integration Sandbox (daily builds), QA Partial Copy (realistic data volume), UAT Full Sandbox (exact production copy for user acceptance), Pre-Prod Full Sandbox (final check before go-live). Changes flow through each stage via SFDX + GitHub Actions CI/CD pipeline.
🔑 Key Points for Interviewer
- Developer: free, 200 MB, individual development, refreshable once per day
- Partial Copy: includes production data sample — realistic testing
- Full: exact production copy — most expensive, slowest refresh
- Scratch Orgs: SFDX, temporary (up to 30 days), disposable
- Accenture uses Scratch Orgs for feature development, Full for UAT
🎤 One-Line Answer
"Four sandbox types: Developer (200 MB), Developer Pro (1 GB), Partial Copy (5 GB sample data), Full (complete production copy) — each for a different stage of the development and testing lifecycle."
Q10
What is Salesforce DX and how does it improve development?
✅ Direct Answer
Salesforce DX is source-driven development using Scratch Orgs (temporary dev environments), Salesforce CLI, Git version control, and CI/CD pipelines. It replaces sandbox + change set deployment with proper DevOps — code lives in Git, not locked in a sandbox.
💡 Why?
Traditional sandbox + change set development doesn't scale: no version control, multiple devs conflict, no automated testing, deployments are manual and error-prone. SFDX enables teams of 20+ developers to work simultaneously with Git branches, automated tests in CI/CD, and reproducible Scratch Orgs per feature.
🌍 Real World Example
Accenture project: 15 developers, 6 features in parallel. Each developer creates their own Scratch Org from CLI in 2 minutes. All code in GitHub with feature branches. PR → GitHub Actions runs 500 Apex tests → code review → merge → auto-deploys to integration sandbox. Zero change set conflicts. Zero "I deployed over your code" incidents.
🔑 Key Points for Interviewer
- Scratch Orgs: temporary (up to 30 days), source-driven, disposable
- sf project deploy start / retrieve start (formerly sfdx force:source:push/pull) to sync between org and local
- Second Generation Packages (2GP) for modular deployments
- Copado, GitHub Actions, Azure DevOps for CI/CD pipelines
- Accenture mandates SFDX for all enterprise engagements
🎤 One-Line Answer
"Salesforce DX replaces sandbox + change set with source-driven Git workflow, Scratch Orgs, and CI/CD — enabling proper team development at enterprise scale with full version control."
Q11
What is the difference between a workflow rule and a Flow? Which should you use today?
✅ Direct Answer
Workflow Rules are officially retired. Flows are the modern replacement — they do everything workflow rules did plus much more: multi-object updates, loops, subflows, complex logic, screen interactions. All new automation must be built in Flows; existing Workflow Rules should be migrated.
🌍 Real World Example
Old Workflow Rule: send email when Opportunity Stage = Closed Won. New Record-Triggered Flow: when Stage changes to Closed Won → update related Contacts to "Customer" → create 5 onboarding Tasks → send email → create a renewal Opportunity for 12 months later. One Flow replaces 3 separate workflow rules and does more.
🔑 Key Points for Interviewer
- Workflow Rules: retired as of 2025 — no new creation allowed
- Process Builder: also retiring — migrate to Flow
- Migrate to Flow tool: Setup → Migrate to Flow
- Flow types: Record-Triggered, Screen, Scheduled, Auto-launched
- Accenture: Flows for all new automation, never workflow rules
🎤 One-Line Answer
"Workflow Rules are retired — Flows replace them entirely. Use Record-Triggered Flows for all new automation; migrate existing workflows to Flow using Salesforce's Migrate to Flow tool."
Q12
What is a Custom Metadata Type vs Custom Settings?
✅ Direct Answer
Custom Metadata Types (CMT): deployment-ready config data — records deploy with change sets and packages, are usable in formulas and validation rules, and reads don't count against SOQL governor limits. Custom Settings: org-specific config — the definition deploys but the data doesn't, records are read via the cached getInstance()/getAll() API, and hierarchy settings resolve values at org/profile/user level.
🌍 Real World Example
API endpoint URLs: stored in CMT (API_Config__mdt) — deploy once, correct URL in every sandbox and production automatically. If stored in Custom Settings, manually re-configure in every sandbox after each refresh. CMT saves 2 hours of post-deployment manual setup per environment on Accenture projects.
🔑 Key Points for Interviewer
- CMT (__mdt): deployable, usable in formulas, no governor limit reads
- Custom Settings (__c): hierarchy support, data not deployable, cached getInstance()/getAll() reads
- CMT for: API endpoints, thresholds, feature flags, mapping tables
- Custom Settings for: user/profile-specific hierarchy values
- Accenture mandates CMTs for all deployable configuration
🎤 One-Line Answer
"Custom Metadata deploys with code and works in formulas — Custom Settings are org-specific and don't deploy. Use CMTs for any configuration that needs to travel between environments automatically."
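A small sketch of the limit-free read mentioned above — API_Config__mdt and its field names are the hypothetical names from the example:

```apex
// Custom Metadata: static getInstance() read — no SOQL consumed
API_Config__mdt cfg = API_Config__mdt.getInstance('ERP_Endpoint');
String baseUrl = cfg.Base_URL__c;

// Hierarchy Custom Setting, for contrast: cached read that resolves
// org -> profile -> user level values for the running user
// Integration_Settings__c s = Integration_Settings__c.getInstance();
```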
Q13
What is a roll-up summary field and what are its limitations?
✅ Direct Answer
Roll-up summary fields aggregate child record data (COUNT, SUM, MIN, MAX) onto the master record of a Master-Detail relationship. Limitations: only available on Master-Detail parents (not Lookup relationships), filter criteria cannot reference most formula fields or fields on related objects, and recalculation can slow saves on large-volume orgs.
🌍 Real World Example
Total_Order_Value__c on Order__c = SUM(Amount__c) from all Order_Line_Item__c. Updates automatically as line items are added/removed. Limitation hit: needed roll-up on a Lookup relationship (Contact → Account). Solution: used DLRS (Declarative Lookup Rollup Summaries) AppExchange package — extends roll-up capability to Lookup relationships.
🔑 Key Points for Interviewer
- Functions: COUNT, SUM, MIN, MAX with optional filter criteria
- Only on Master-Detail parent — not Lookup
- For Lookup: use DLRS package or Apex trigger
- Max 25 roll-up summary fields per object
- Can slow down saves on high-volume orgs — test with realistic data volumes
🎤 One-Line Answer
"Roll-up summary (COUNT/SUM/MIN/MAX) aggregates child data onto a master record — only available on Master-Detail, not Lookup. Use DLRS or Apex trigger for roll-ups on Lookup relationships."
Q14
What is Platform Event in Salesforce and when do you use it?
✅ Direct Answer
Platform Events implement publish-subscribe messaging — publishers send events via EventBus.publish(); subscribers (Apex triggers on the event, Flows, or external systems via CometD) react independently. Events are decoupled — publisher doesn't know or care who subscribes.
💡 Why?
Platform Events solve tight coupling in integrations. Instead of one trigger directly calling another system synchronously, it publishes an event — any interested system subscribes independently. If a subscriber fails, the publisher isn't affected. Enables real-time LWC notifications, decoupled integration, and reliable async processing.
🌍 Real World Example
Order confirmation: Order trigger publishes Order_Placed__e. Multiple independent subscribers: 1) Apex trigger creates fulfillment records 2) LWC on warehouse page receives real-time notification 3) MuleSoft receives via CometD and triggers ERP. Publisher (order trigger) knows nothing about any of them — fully decoupled.
🔑 Key Points for Interviewer
- Platform Event objects end in __e
- EventBus.publish() to fire; triggers on __e to subscribe in Apex
- High-volume events: better performance, no sharing rules
- ReplayId: replay missed events up to 72 hours
- CDC (Change Data Capture): Salesforce-generated events for record changes
🎤 One-Line Answer
"Platform Events enable publish-subscribe messaging — publisher fires the event, multiple independent subscribers react. Fully decoupled — publisher doesn't know or care who's listening."
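Both sides of the pattern, assuming the Order_Placed__e event from the example (the Order_Number__c field is a hypothetical name):

```apex
// Publisher — fire and forget; no knowledge of subscribers
Order_Placed__e evt = new Order_Placed__e(Order_Number__c = 'ORD-1001');
Database.SaveResult sr = EventBus.publish(evt);
if (!sr.isSuccess()) {
    System.debug('Publish failed: ' + sr.getErrors()[0].getMessage());
}
```

```apex
// Subscriber — an Apex trigger on the event object (after insert only)
trigger OrderPlacedSubscriber on Order_Placed__e (after insert) {
    for (Order_Placed__e e : Trigger.new) {
        System.debug('Fulfilling order ' + e.Order_Number__c);
    }
}
```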
Q15
What is the difference between Data Loader and Data Import Wizard?
✅ Direct Answer
Data Import Wizard: browser-based, max 50,000 records, limited to specific standard + custom objects, no delete operation, easy for non-technical users. Data Loader: desktop app (or CLI), millions of records, all objects, all DML operations (Insert/Update/Upsert/Delete/Hard Delete), schedulable via command line.
🌍 Real World Example
Accenture ERP migration project: 3.5M Account records loaded via Data Loader in batches of 200K overnight using Bulk API 2.0. Post-migration, business users updated 150 contact phone numbers via Data Import Wizard — no IT involvement needed. Right tool per volume and user type.
🔑 Key Points for Interviewer
- Data Import Wizard: 50K limit, browser, limited objects, no delete
- Data Loader: unlimited records, desktop/CLI, all objects, all DML operations
- Data Loader uses Bulk API 2.0 for large volumes
- External IDs: required for upsert — match records without knowing Salesforce IDs
- Accenture uses Data Loader with Bulk API for all data migrations
🎤 One-Line Answer
"Data Import Wizard: 50K limit, browser, easy for admins. Data Loader: millions of records, all objects, all DML operations, schedulable — use Loader for enterprise data migrations."
🔒 Section 2 — Security & Sharing Model
Q16–Q30 · Intermediate · Most Asked Topic at Accenture Interviews
Q16
What is the difference between a Profile and a Permission Set?
✅ Direct Answer
Profile: mandatory, exactly one per user, defines baseline permissions (object CRUD, field access, system permissions, login hours, IP restrictions). Permission Set: optional, multiple per user, extends additional permissions on top of the profile. Modern best practice: minimum-access profile + permission sets for all specific access.
💡 Why?
Salesforce is retiring permissions from profiles — all permissions will eventually live only in Permission Sets. This makes user provisioning scalable and auditable. Changing one Permission Set updates all assigned users instantly vs. maintaining dozens of different profiles. Accenture tests this because it shows you understand modern Salesforce security strategy.
🌍 Real World Example
Standard User profile (minimum access). Permission Sets: "Sales_Access" (for all reps), "Dashboard_Admin" (reporting leads), "Integration_User" (API access). When a rep is promoted to manager, add "Manager_Reports" permission set — don't create a new profile. When they leave, remove all permission sets in one step.
🔑 Key Points for Interviewer
- Profile: 1 per user, required, baseline
- Permission Set: multiple per user, extends access
- Permission Set Groups: bundle multiple PSets for a job function
- Salesforce retiring profile permissions — PSets are the future
- Muting Permission Sets: reduce access from a Permission Set Group
🎤 One-Line Answer
"Profile = mandatory baseline (1 per user). Permission Set = optional extension (multiple per user). Best practice: minimum-access profile + permission sets — profiles are being retired in favor of PSets."
Q17
Explain the Salesforce sharing model — OWD, Role Hierarchy, Sharing Rules, Manual Sharing.
✅ Direct Answer
Four layers working together: OWD (most restrictive baseline — Private, Public Read Only, Public Read/Write). Role Hierarchy (managers automatically see their team's records). Sharing Rules (extend access beyond OWD for groups/roles). Manual Sharing (individual record sharing by record owner). Each layer can only OPEN access — never restrict beyond OWD.
🌍 Real World Example
Opportunity OWD = Private. Role Hierarchy: Sales Manager sees their reps' opportunities. Sharing Rule: share all Opportunities in "India Region" with "India Sales Team" public group. Manual Sharing: rep shares one confidential deal with Legal team for contract review. All four layers working together for precise access control.
🔑 Key Points for Interviewer
- OWD = most restrictive starting point — build up from here
- Role Hierarchy: data VISIBILITY (what you SEE) not permissions (what you DO)
- Sharing Rules: criteria-based or owner-based — max 300 per object (at most 50 criteria-based)
- Manual Sharing: granular, one record at a time, doesn't persist on owner change
- Apex Managed Sharing: programmatic sharing that persists through owner changes
🎤 One-Line Answer
"Sharing model: OWD (restrictive baseline) → Role Hierarchy (manager sees team's records) → Sharing Rules (extend to groups/criteria) → Manual Sharing (individual records) — each layer only opens access, never restricts."
Q18
What is the difference between "with sharing" and "without sharing" in Apex?
✅ Direct Answer
with sharing: Apex respects the running user's sharing rules — SOQL returns only records the user can see. without sharing: runs in system context — ignores sharing, sees all records regardless of user access. A class with no keyword inherits the caller's mode, and runs without sharing when it is the entry point — a potential security vulnerability. inherited sharing makes this explicit but defaults to with sharing at the entry point.
💡 Why?
A class marked without sharing called from a user-facing LWC could expose records the user shouldn't see. Always use with sharing for user-facing code. without sharing is appropriate for system processes (batch jobs, scheduled Apex) that need to process all records regardless of who triggered them.
🌍 Real World Example
LWC Account search controller: "with sharing" — user only sees Accounts they have access to. Nightly cleanup batch: "without sharing" — must process all expired records regardless of owner. Security review finding on an Accenture project: controller class missing "with sharing" was returning data from other regions — 2-day emergency fix.
🔑 Key Points for Interviewer
- with sharing: respects user's record visibility
- without sharing: system context, sees all records
- inherited sharing: inherits calling class's mode — good for utility methods
- Unspecified = inherits the caller's mode, runs without sharing at the entry point — always be explicit
- FLS NOT enforced by either — handle separately with stripInaccessible()
🎤 One-Line Answer
"with sharing respects user's record visibility. without sharing ignores sharing and sees everything — always use with sharing for user-facing Apex; without sharing only for system-level batch processes."
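The three declarations side by side, matching the example above — class names are illustrative:

```apex
public with sharing class AccountSearchController {
    // User-facing: SOQL here respects the running user's record visibility
}

public without sharing class NightlyCleanupBatch {
    // System context: sees and processes all records regardless of owner
}

public inherited sharing class SharedUtils {
    // Runs in the caller's mode; defaults to with sharing at the entry point
}
```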
Q19
What is field-level security (FLS) and how do you enforce it in Apex?
✅ Direct Answer
FLS controls which fields a user can see and edit, configured per profile/permission set. Hidden fields don't appear in UI. However, Apex runs in system context by default — it bypasses FLS unless explicitly enforced using WITH SECURITY_ENFORCED in SOQL, Schema.describeSObjectType checks, or stripInaccessible() method.
🌍 Real World Example
// Option 1: WITH SECURITY_ENFORCED — throws an exception if the user
// lacks FLS on any field in the SELECT
List<Account> accs = [SELECT Id, Name, AnnualRevenue FROM Account WITH SECURITY_ENFORCED];
// Option 2: stripInaccessible — silently removes inaccessible fields
SObjectAccessDecision dec = Security.stripInaccessible(
    AccessType.READABLE,
    [SELECT Id, Name, AnnualRevenue FROM Account]
);
List<Account> safeAccs = (List<Account>) dec.getRecords();
🔑 Key Points for Interviewer
- FLS set on profiles and permission sets per field per object
- WITH SECURITY_ENFORCED: throws exception if field inaccessible
- stripInaccessible(): removes inaccessible fields gracefully
- Schema.describeSObjectType(): manual FLS check per field
- Accenture security reviews check all Apex for FLS enforcement
🎤 One-Line Answer
"FLS controls field visibility per profile — Apex bypasses it by default. Enforce with WITH SECURITY_ENFORCED in SOQL or stripInaccessible() to remove inaccessible fields from results."
Q20
What is Apex Managed Sharing and when would you use it?
✅ Direct Answer
Apex Managed Sharing programmatically grants record access by inserting Share objects (AccountShare, CustomObject__Share) with a custom RowCause. Unlike manual sharing, Apex Managed Sharing persists when record ownership changes — making it suitable for complex, dynamic sharing scenarios that declarative rules can't handle.
🌍 Real World Example
// Share a Deal with all Meeting participants
List<Deal__Share> shares = new List<Deal__Share>();
for (Id participantId : participantIds) {
    Deal__Share share = new Deal__Share();
    share.ParentId = dealId;
    share.UserOrGroupId = participantId;
    share.AccessLevel = 'Read';
    share.RowCause = Schema.Deal__Share.RowCause.MeetingParticipant__c; // Custom reason
    shares.add(share);
}
insert shares;
🔑 Key Points for Interviewer
- Share objects: AccountShare, ContactShare, CustomObj__Share
- RowCause must be custom — create in object's Sharing Reason settings
- Persists through ownership changes (unlike manual sharing)
- Use when sharing logic depends on related records or dynamic conditions
- Delete shares when the condition no longer applies
🎤 One-Line Answer
"Apex Managed Sharing inserts Share objects with custom RowCause — persists through ownership changes, used for complex sharing logic that declarative rules can't express."
Q21
What is the difference between object-level access and record-level access?
✅ Direct Answer
Object-level access (Profile/Permission Set): controls whether a user can perform CRUD on an object at all. Record-level access (OWD, Roles, Sharing Rules): controls which SPECIFIC records of that object the user can see or edit. Both must be satisfied — having object access but no record access means seeing the tab but no records.
🌍 Real World Example
Support Rep: Object = Read + Create on Cases (profile). Record = only cases in their region (sharing rule). Support Manager: Object = Read + Create + Edit + Delete on Cases. Record = all cases for their team (role hierarchy). Without object access, record access is irrelevant. Without record access, object access shows an empty list.
🔑 Key Points for Interviewer
- Object-level: CRUD + FLS — set on Profile/Permission Set
- Record-level: OWD → Role Hierarchy → Sharing Rules → Manual
- Both must be satisfied — AND condition, not OR
- View All / Modify All: bypass record-level for one object
- View All Data / Modify All Data: system-level bypass for all objects
🎤 One-Line Answer
"Object-level = can you use this object at all (CRUD via Profile). Record-level = which specific records can you see (OWD + Roles + Sharing). Both must be satisfied simultaneously."
Q22
What are public groups and queues? How are they different?
✅ Direct Answer
Public Groups: collections of users, roles, or other groups — used in sharing rules, list view visibility, and email distribution. Queues: special groups that own records — holding unassigned records (Cases, Leads, custom objects) until team members pick them up. Queues appear as users in the OwnerId field.
🌍 Real World Example
Lead Queue: all web form leads → "SDR Queue" → SDRs pick up from queue list view and self-assign. Sharing rule: share all Accounts tagged "Enterprise" with "Enterprise Sales Team" public group. When a new rep joins the team, add to the group — automatically inherits all sharing rules, list view access, and email distribution.
🔑 Key Points for Interviewer
- Public Groups: for sharing, list views, email — not record ownership
- Queues: own records, work with assignment rules, members get email notifications
- Queue members receive email on new records entering the queue
- Queues: SOQL WHERE OwnerId = :queueId to find queue-owned records
- Assignment Rules route Leads/Cases to queues automatically
🎤 One-Line Answer
"Public Groups are user collections for sharing rules and list views. Queues own unassigned records until team members pick them up — queues appear as users in OwnerId and work with assignment rules."
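A minimal Apex sketch of the queue mechanics above (the "SDR Queue" name comes from the example; assume such a queue exists and supports the Lead object):

```apex
// Queues are stored as Group records with Type = 'Queue'
Group sdrQueue = [SELECT Id FROM Group WHERE Type = 'Queue' AND Name = 'SDR Queue' LIMIT 1];

// A queue Id can be written to OwnerId just like a user Id
Lead webLead = new Lead(LastName = 'Prospect', Company = 'Acme', OwnerId = sdrQueue.Id);
insert webLead;

// Find everything still waiting in the queue
List<Lead> unassigned = [SELECT Id, Name FROM Lead WHERE OwnerId = :sdrQueue.Id];
```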
Q23
What is Territory Management and when should you use it over role hierarchy?
✅ Direct Answer
Enterprise Territory Management assigns Accounts (and their related records) to territories based on rules — geography, industry, revenue, or any field. Use it when multiple reps cover the same account (overlay selling), territories are geographic rather than hierarchical, or accounts move between territories without changing ownership.
🌍 Real World Example
Pharma company: each doctor (Account) is covered by Field Sales Rep (primary) + Inside Sales Rep (overlay) + Medical Science Liaison (overlay). Three different people covering one account simultaneously. Territory Management assigns all three to the doctor's territory — role hierarchy can only assign one owner per record.
🔑 Key Points for Interviewer
- Role Hierarchy: rigid one-manager-per-user, one owner per record
- Territory Management: flexible, multi-user coverage, rule-based assignment
- Territory rules auto-assign Accounts when field values match criteria
- Works alongside role hierarchy — not a full replacement
- Accenture uses Territory Management for pharma and financial services clients
🎤 One-Line Answer
"Territory Management handles overlay selling and geographic territories — use it when multiple reps cover the same account or territory assignment doesn't match the org chart hierarchy."
Q24
What is Shield Platform Encryption?
✅ Direct Answer
Shield Platform Encryption is an enterprise-grade, paid add-on that encrypts Salesforce data at rest using AES-256. Encrypts standard and custom fields, files, attachments — more comprehensive than classic encryption. Supports deterministic encryption (allows search on encrypted fields) and Bring Your Own Key (BYOK).
🌍 Real World Example
Accenture healthcare client needed HIPAA compliance — patient SSNs, dates of birth, and medical record numbers encrypted at rest. Shield was enabled on those fields. Deterministic encryption allowed exact-match filtering on SSNs (a lookup for "123-45-6789" still works). Classic encryption would have blocked filtering entirely. BYOK meant the client controlled their own encryption keys — Salesforce couldn't decrypt their data.
🔑 Key Points for Interviewer
- Classic Encryption: free, custom fields only, no search, AES-128
- Shield: paid, standard + custom, AES-256, searchable, BYOK
- Shield includes: Platform Encryption + Event Monitoring + Field Audit Trail
- Encryption can impact performance — test before enabling high-volume fields
- Required for HIPAA, FINRA, PCI DSS compliance on Accenture projects
🎤 One-Line Answer
"Shield Platform Encryption is AES-256 encryption at rest with search capability (deterministic) and BYOK — required for HIPAA/FINRA compliance; far more capable than classic AES-128 encryption."
Q25
How do you troubleshoot a user who can't see a record they should have access to?
✅ Direct Answer
Debug systematically: 1) Check OWD — is the default access for the object Private or Read Only? 2) Check the record owner's role vs the user's role — does the hierarchy grant access? 3) Check sharing rules — does any rule grant access? 4) Check manual sharing — has anyone shared this specific record? 5) Check Profile CRUD — can the user see the object at all? Use Lightning's "Why can't I see this?" diagnostic tool.
🌍 Real World Example
User reported: "I can't see Account ABC." Systematic debug: OWD = Private ✓, User not in owner's role branch ✓, No sharing rule covering this account ✓, No manual sharing ✓ — root cause: user was recently moved to a different role that's no longer in the sharing path. Fix: updated role or added targeted sharing rule.
🔑 Key Points for Interviewer
- Lightning: Setup → Users → [user] → "Why can't I see this?" tool
- Debug order: OWD → Role → Sharing Rules → Manual → Profile CRUD
- Record type restrictions can also hide records from page layout
- Fields missing but record visible: check FLS, not sharing
- Login As user: reproduce exactly what they see
🎤 One-Line Answer
"Debug in order: OWD → Role Hierarchy → Sharing Rules → Manual Sharing → Profile CRUD — or use Lightning's 'Why can't I see this?' diagnostic tool for the fastest path to root cause."
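The same question can be answered in SOQL via the standard UserRecordAccess object, which computes the user's effective access across all sharing mechanisms. A sketch, assuming an admin context where both Ids are known (placeholders here):

```apex
// UserRecordAccess rolls up OWD, role hierarchy, sharing rules, and manual shares
UserRecordAccess access = [
    SELECT RecordId, HasReadAccess, HasEditAccess, MaxAccessLevel
    FROM UserRecordAccess
    WHERE UserId = :affectedUserId     // placeholder: user who reported the issue
      AND RecordId = :problemRecordId  // placeholder: record they can't see
];
System.debug('Read: ' + access.HasReadAccess + ', Max: ' + access.MaxAccessLevel);
```

If HasReadAccess is false but Profile CRUD is granted, the gap is record-level sharing, not object permissions.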
Q26
What is the difference between "View All" and "Modify All" permissions?
✅ Direct Answer
View All: user can see ALL records of a specific object regardless of OWD, role hierarchy, or sharing rules — complete bypass of sharing for read access. Modify All: user can see AND edit/delete ALL records of that object — complete bypass for read + write access. Both configured per object on profiles or permission sets.
🌍 Real World Example
Compliance Officer needs to audit ALL Opportunities across all regions (OWD = Private, complex territories). Solution: "View All" on Opportunity permission set → sees every opportunity instantly. "Modify All" would also allow editing — too broad. Principle of least privilege: give only what's needed. "View All Data" / "Modify All Data" are org-level admin equivalents for ALL objects.
🔑 Key Points for Interviewer
- View All: read-only bypass of all sharing for one object
- Modify All: read + write + delete bypass for one object
- View All Data / Modify All Data: admin-level, all objects
- Use sparingly — principle of least privilege
- Audit who has View All / Modify All regularly in large orgs
🎤 One-Line Answer
"View All bypasses all sharing for read on one object. Modify All also bypasses for edit/delete. Both override OWD, roles, and sharing rules completely — grant only when truly necessary."
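The audit mentioned in the last key point can be done with one query on the standard ObjectPermissions object. A sketch:

```apex
// Which profiles / permission sets grant View All or Modify All on Opportunity?
for (ObjectPermissions op : [
    SELECT Parent.Name, Parent.IsOwnedByProfile,
           PermissionsViewAllRecords, PermissionsModifyAllRecords
    FROM ObjectPermissions
    WHERE SobjectType = 'Opportunity'
      AND (PermissionsViewAllRecords = true OR PermissionsModifyAllRecords = true)
]) {
    System.debug(op.Parent.Name + ' ModifyAll: ' + op.PermissionsModifyAllRecords);
}
```

Parent.IsOwnedByProfile distinguishes grants coming from a profile's underlying permission set from standalone permission sets.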
Q27
What are login hours and IP restrictions and how do they improve security?
✅ Direct Answer
Login Hours (Profile-based): restrict which hours users can log in — e.g., 7 AM - 8 PM weekdays only. IP Restrictions (Profile or org-level Network Access): restrict login to specific IP address ranges — only corporate network or VPN IPs. Together they form time-based and location-based access controls.
🌍 Real World Example
Bank client compliance requirement: Login Hours = 7 AM - 9 PM weekdays (no weekend access for non-emergency). IP Restriction = corporate office range + VPN IPs only. A compromised account attempting login from an overseas IP at 3 AM is blocked at two security layers — even with correct credentials. Essential for regulated industries Accenture serves.
🔑 Key Points for Interviewer
- Login Hours: per profile, per day of week, active sessions kicked out
- IP Restrictions: profile-based or org-wide (Setup → Security → Network Access)
- Trusted IP ranges: bypass login verification but still enforce restrictions
- MFA: additional layer — always enable for all users in 2026
- Accenture mandates both for all financial services and healthcare clients
🎤 One-Line Answer
"Login Hours restrict when users can access Salesforce; IP Restrictions restrict which networks they access from — two security layers that prevent unauthorized access even with correct credentials."
Q28
In what scenarios does sharing recalculation happen?
✅ Direct Answer
Sharing recalculation triggers when: OWD settings change, sharing rules are added/modified/deleted, role hierarchy changes, a record's ownership changes, record field values change that affect criteria-based sharing rules, Apex Managed Sharing insert/delete operations occur, or manual "Recalculate" is run from Setup.
💡 Why?
Salesforce maintains a sharing table for every record — every change that affects who can see a record triggers a background recalculation to update this table. On large orgs, sharing recalculation for OWD changes can take hours and lock record updates during processing — a production impact consideration Accenture architects plan for.
🌍 Real World Example
Client changed OWD on Account from Public Read Only to Private on a 2M record org. Recalculation ran for 6 hours — users couldn't see Account records properly during that time. Lesson: plan OWD changes for off-peak hours and communicate downtime. Use "Defer Sharing Calculations" for large batch operations to batch the recalculation.
🔑 Key Points for Interviewer
- OWD change: full recalculation on all records — most expensive
- Ownership change: recalculates sharing for that specific record
- Criteria-based sharing: recalculates when field values in the criteria change
- Defer Sharing Calculations: Setup option to batch recalculations
- Plan OWD changes for maintenance windows on large orgs
🎤 One-Line Answer
"Sharing recalculates when OWD changes, sharing rules are modified, ownership changes, or criteria-based sharing field values change — OWD changes trigger full recalculation, which can take hours on large orgs."
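The sharing table that recalculation maintains is itself queryable. A sketch that inspects one Account's share rows (the Account Id is a placeholder):

```apex
// Each row is one access grant; RowCause shows which mechanism produced it
for (AccountShare s : [
    SELECT UserOrGroupId, AccountAccessLevel, RowCause
    FROM AccountShare
    WHERE AccountId = :someAccountId
]) {
    // RowCause values include Owner, Manual, Rule, ImplicitParent, or a custom reason
    System.debug(s.RowCause + ' -> ' + s.AccountAccessLevel);
}
```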
Q29
What is a Permission Set Group and how does it simplify user management?
✅ Direct Answer
Permission Set Groups bundle multiple Permission Sets into one unit assigned to users — instead of assigning 5 separate Permission Sets to every Sales Rep, create one "Sales Rep" Permission Set Group containing all 5. Assign the group; all included Permission Sets apply automatically. Muting Permission Sets can remove specific permissions from a group.
🌍 Real World Example
Before PSG: onboarding a new Sales Rep = assign 6 Permission Sets (Sales Access, Quote Access, Report Access, Approval Submit, Mobile Access, API Read). After PSG: assign 1 "Sales Rep" PSG = all 6 applied. New access needed company-wide? Add one Permission Set to the group — all 500 reps get it instantly. Accenture standardized this pattern for their FSI client — onboarding time reduced from 30 to 5 minutes.
🔑 Key Points for Interviewer
- PSG bundles multiple Permission Sets into one assignable unit
- Muting Permission Set: removes specific permissions from a PSG without removing PSets
- Changes to included PSets automatically apply to all PSG members
- Simplifies provisioning audits — one assignment per user job function
- Available in Enterprise Edition and above
🎤 One-Line Answer
"Permission Set Group bundles multiple Permission Sets into one assignment — assign one group per job function instead of 5+ individual Permission Sets, with Muting PSets to subtract specific permissions."
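The one-assignment onboarding described above is a single PermissionSetAssignment row. A sketch (the group's developer name and the user Id are placeholders from the example):

```apex
// Look up the Permission Set Group by developer name
PermissionSetGroup psg = [
    SELECT Id FROM PermissionSetGroup
    WHERE DeveloperName = 'Sales_Rep' LIMIT 1
];

// One insert replaces five or six individual Permission Set assignments
insert new PermissionSetAssignment(
    AssigneeId = newRepUserId,   // placeholder: the new rep's user Id
    PermissionSetGroupId = psg.Id
);
```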
Q30
What are the different OAuth 2.0 flows in Salesforce and when do you use each?
✅ Direct Answer
Web Server Flow: user-facing apps where user logs in and grants permission. JWT Bearer: server-to-server, no user interaction, certificate-based (best for integrations). Client Credentials: machine-to-machine with consumer key/secret (simpler than JWT). Username-Password: legacy, avoid (credentials in request). Device Flow: IoT/CLI devices with limited input.
🌍 Real World Example
Accenture integration patterns: MuleSoft→Salesforce: Client Credentials (key/secret exchange for token, simple, machine-to-machine). Mobile App→Salesforce: Web Server Flow (user logs in, grants app permission). Nightly batch job: JWT Bearer (no user interaction, certificate-based, most secure for automated processes).
🔑 Key Points for Interviewer
- Web Server: user-interactive, browser redirect, most common for web apps
- JWT Bearer: certificate-based, server-to-server, most secure for automation
- Client Credentials: key/secret, machine-to-machine, simpler setup
- Username-Password: avoid — credentials in every token request
- Access token: 1-2 hours. Refresh token: long-lived, exchanges for new access token
🎤 One-Line Answer
"Web Server for user-facing apps. JWT Bearer for server-to-server automation (most secure). Client Credentials for machine-to-machine. Never Username-Password — credentials shouldn't travel in every token request."
⚙️ Section 3 — Apex & Triggers
Q31–Q50 · Intermediate → Advanced · Live Coding Expected
Q31
What is bulkification in Apex? Write a bulkified trigger example.
✅ Direct Answer
Bulkification means writing Apex that handles multiple records efficiently — all SOQL queries and DML operations outside of loops, processing collections instead of individual records. Salesforce processes up to 200 records per trigger invocation, so code must handle all 200 in one transaction without exceeding governor limits.
🌍 Real World Example
// BAD - SOQL inside loop
trigger Bad on Account (after insert) {
    for (Account a : Trigger.new) {
        List<Contact> c = [SELECT Id FROM Contact WHERE AccountId = :a.Id]; // 200 queries!
    }
}

// GOOD - Bulkified
trigger Good on Account (after insert) {
    // Collect all IDs first
    Set<Id> accIds = new Set<Id>();
    for (Account a : Trigger.new) { accIds.add(a.Id); }
    // One SOQL for all records
    Map<Id, List<Contact>> contactMap = new Map<Id, List<Contact>>();
    for (Contact c : [SELECT Id, AccountId FROM Contact WHERE AccountId IN :accIds]) {
        if (!contactMap.containsKey(c.AccountId)) contactMap.put(c.AccountId, new List<Contact>());
        contactMap.get(c.AccountId).add(c);
    }
    // Process each account with pre-fetched contacts
    for (Account a : Trigger.new) {
        List<Contact> contacts = contactMap.get(a.Id);
        // process contacts
    }
}
🔑 Key Points for Interviewer
- Never SOQL or DML inside a for loop
- Use Set to collect IDs, Map for lookups, List for DML
- Trigger.new: all records in transaction (up to 200)
- Accenture live coding test: write a bulkified trigger in 10 minutes
- Pattern: collect → query → process → DML
🎤 One-Line Answer
"Bulkification = never SOQL or DML inside loops — collect all IDs, query once with IN clause, use Map for O(1) lookups, batch DML at the end. This is the single most important Apex pattern."
Q32
Write an Apex trigger that throws an error if Amount > $50,000 for non-System Administrator users.
✅ Direct Answer
Before trigger on Opportunity — checks running user's profile, applies limit only for non-admins.
🌍 Real World Example
trigger OpportunityAmountLimit on Opportunity (before insert, before update) {
    // Query profile ONCE outside the loop
    String profileName = [SELECT Name FROM Profile WHERE Id = :UserInfo.getProfileId()].Name;
    if (profileName != 'System Administrator') {
        for (Opportunity opp : Trigger.new) {
            if (opp.Amount != null && opp.Amount > 50000) {
                opp.Amount.addError(
                    'Non-admin users cannot create Opportunities exceeding $50,000. ' +
                    'Contact your manager for approval.'
                );
            }
        }
    }
}
🔑 Key Points for Interviewer
- Before trigger — validation before save, error prevents record from saving
- UserInfo.getProfileId() — gets running user's profile ID
- Profile SOQL outside loop — one query total
- addError() on field highlights that specific field in the UI
- Accenture expects you to write this from memory in the interview
🎤 One-Line Answer
"Before trigger: query profile name once, check non-admin AND Amount > 50K, call opp.Amount.addError() to block save — field-level error highlights the Amount field in the UI."
Q33
How do you prevent recursion in Apex triggers?
✅ Direct Answer
Use a static Boolean variable in a utility class. Set to true when the trigger first fires — check at trigger start and return immediately if already true. Since static variables persist for the entire transaction, the trigger only runs logic once regardless of how many times it's called.
🌍 Real World Example
// TriggerUtil class
public class TriggerUtil {
    public static Boolean hasAccountTriggerRun = false;
}

// Trigger
trigger AccountTrigger on Account (after update) {
    if (TriggerUtil.hasAccountTriggerRun) return; // Exit if already ran
    TriggerUtil.hasAccountTriggerRun = true;
    List<Account> toUpdate = new List<Account>();
    for (Account a : Trigger.new) {
        if (a.Industry != Trigger.oldMap.get(a.Id).Industry) {
            toUpdate.add(new Account(Id = a.Id, Description = 'Industry updated'));
        }
    }
    if (!toUpdate.isEmpty()) update toUpdate; // Won't re-fire — hasRun = true
}
🔑 Key Points for Interviewer
- Static variable = transaction-scoped (resets between separate transactions)
- For finer control: use Set<Id> of processed record IDs
- One trigger per object — consolidating all events in one file avoids unpredictable execution order between triggers and keeps recursion control in one place
- Recursion causes CPU limit errors — always prevent
- Accenture asks this with live coding — practice writing it
🎤 One-Line Answer
"Static Boolean in utility class — check at trigger start, return if true, set to true on first run. Since statics persist for the transaction, logic only executes once regardless of re-entrant calls."
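The Set<Id> variant from the key points, sketched: unlike the Boolean flag, it lets a re-entrant call process records the trigger has not seen yet instead of skipping the whole batch.

```apex
public class TriggerUtil {
    // Transaction-scoped: survives re-entrant trigger calls, resets per transaction
    public static Set<Id> processedAccountIds = new Set<Id>();
}

trigger AccountTrigger on Account (after update) {
    List<Account> toProcess = new List<Account>();
    for (Account a : Trigger.new) {
        if (!TriggerUtil.processedAccountIds.contains(a.Id)) {
            TriggerUtil.processedAccountIds.add(a.Id);
            toProcess.add(a); // first time this record is seen in the transaction
        }
    }
    // run business logic on toProcess only
}
```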
Q34
What is the difference between before and after triggers? When do you use each?
✅ Direct Answer
Before triggers: fire before the record is saved to the database — modify Trigger.new directly without DML (free), records have no Id yet on insert; used for field validation and derivation. After triggers: fire after the record is saved (it now has an Id) but before the transaction commits — updating the triggering record requires explicit DML; used when creating or updating related records that need the new record's Id.
🌍 Real World Example
Before trigger on Lead insert: set Rating = 'Hot' when LeadSource = 'Web' — modifies Trigger.new directly, with no DML and no governor limit cost. After trigger on Opportunity insert: create a related Task__c using the new Opportunity's Id (only available after save). A common mistake is using an after trigger for a same-record field update — it wastes a DML operation and risks recursion.
🔑 Key Points for Interviewer
- Before insert: Trigger.new has no Id — don't use Id fields
- Before update: Trigger.old = before values, Trigger.new = new values
- After triggers: Trigger.new has Ids — required for related record operations
- Before: free field modification. After: explicit DML required
- Most field defaulting/derivation = before. Related record creation = after
🎤 One-Line Answer
"Before: modify fields without DML cost (record not yet saved, no Id on insert). After: record has Id, use for related record creation — don't waste a DML on after trigger for simple field updates."
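A minimal side-by-side sketch of the two trigger types (objects and field values follow the example above; the standard Task object stands in for the custom Task__c):

```apex
// BEFORE insert: assign fields directly, no DML, record has no Id yet
trigger LeadDefaults on Lead (before insert) {
    for (Lead l : Trigger.new) {
        if (l.LeadSource == 'Web') {
            l.Rating = 'Hot'; // free: written as part of the save itself
        }
    }
}

// AFTER insert: the record now has an Id, so related records can reference it
trigger OppFollowUp on Opportunity (after insert) {
    List<Task> tasks = new List<Task>();
    for (Opportunity o : Trigger.new) {
        tasks.add(new Task(WhatId = o.Id, Subject = 'Follow up')); // o.Id exists now
    }
    insert tasks; // explicit DML required in after triggers
}
```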
Q35
What is Batch Apex? Write a basic batch class example.
✅ Direct Answer
Batch Apex processes large record sets asynchronously in chunks (up to 2,000 per execute()) with fresh governor limits per chunk. Implements Database.Batchable interface with start() (defines scope), execute() (processes each chunk), and finish() (post-processing). Use for processing millions of records beyond single-transaction limits.
🌍 Real World Example
global class AccountRatingBatch implements Database.Batchable<SObject> {
    global Database.QueryLocator start(Database.BatchableContext bc) {
        return Database.getQueryLocator(
            'SELECT Id, Rating FROM Account WHERE Rating = null'
        );
    }
    global void execute(Database.BatchableContext bc, List<Account> scope) {
        for (Account a : scope) {
            a.Rating = 'Cold'; // Default rating
        }
        update scope;
    }
    global void finish(Database.BatchableContext bc) {
        // Send completion email to admin
        Messaging.SingleEmailMessage mail = new Messaging.SingleEmailMessage();
        mail.setToAddresses(new String[] { 'admin@company.com' });
        mail.setSubject('Account Rating Batch Completed');
        mail.setPlainTextBody('All Accounts with a null Rating have been processed.');
        Messaging.sendEmail(new Messaging.SingleEmailMessage[] { mail });
    }
}
// Execute: Database.executeBatch(new AccountRatingBatch(), 200);
🔑 Key Points for Interviewer
- Max 5 concurrent batch jobs per org
- Chunk size 1-2000, default 200 — smaller if hitting limits in execute()
- Database.Stateful: preserves instance variables between chunks
- Database.AllowsCallouts: enables HTTP callouts in execute()
- QueryLocator: handles up to 50M records efficiently
🎤 One-Line Answer
"Batch Apex processes millions of records in chunks (up to 2000/execute) with fresh governor limits per chunk — implement start()/execute()/finish(), use QueryLocator for 50M record capacity."
Q36
What is Queueable Apex and how is it different from @future?
✅ Direct Answer
@future: simple async, primitive parameters only, no chaining, no job monitoring, no SObjects. Queueable: implements Queueable interface, passes complex types including SObjects, chainable (System.enqueueJob in execute()), has Job ID for monitoring, supports callouts with Database.AllowsCallouts. Queueable is the modern replacement for @future.
🌍 Real World Example
public class ProcessOrdersQueueable implements Queueable, Database.AllowsCallouts {
    private List<Id> orderIds;
    public ProcessOrdersQueueable(List<Id> ids) {
        this.orderIds = ids;
    }
    public void execute(QueueableContext ctx) {
        // Make the callout to ERP FIRST — a callout after DML in the same
        // transaction throws "uncommitted work pending"
        Http http = new Http();
        HttpRequest req = new HttpRequest();
        req.setEndpoint('callout:ERP_System/orders');
        req.setMethod('POST');
        http.send(req);
        // Then process orders
        List<Order__c> orders = [SELECT Id, Status__c FROM Order__c WHERE Id IN :orderIds];
        for (Order__c o : orders) { o.Status__c = 'Processed'; }
        update orders;
        // Chain next queueable if needed
        // System.enqueueJob(new NextStepQueueable(orderIds));
    }
}
🔑 Key Points for Interviewer
- @future: primitive params only, no SObjects, no chaining, no monitoring
- Queueable: SObject params, chainable, Job ID, callouts allowed
- Max 1 chained job per execute() call; chaining is blocked in test context (enqueueing from a running Queueable in a test throws an exception)
- Queueable replaces @future in all modern Apex patterns
- Use @future only for simple fire-and-forget with primitive data
🎤 One-Line Answer
"Queueable > @future: accepts SObjects, chainable, has Job ID for monitoring, supports callouts — use Queueable for all modern async needs; @future only for simple fire-and-forget operations."
Q37
What is a Trigger Framework and why should you always use one?
✅ Direct Answer
A Trigger Framework separates trigger logic from the trigger file — the trigger file is minimal (just calls a handler class). The handler class organizes all business logic by event type (beforeInsert, afterUpdate). This prevents multiple triggers per object, enables independent unit testing of each method, and makes code maintainable by any developer.
🌍 Real World Example
// Trigger - minimal, just delegates
trigger OpportunityTrigger on Opportunity (before insert, before update, after insert, after update) {
    OpportunityTriggerHandler handler = new OpportunityTriggerHandler();
    if (Trigger.isBefore && Trigger.isInsert) handler.beforeInsert(Trigger.new);
    if (Trigger.isBefore && Trigger.isUpdate) handler.beforeUpdate(Trigger.new, Trigger.oldMap);
    if (Trigger.isAfter && Trigger.isInsert) handler.afterInsert(Trigger.new);
    if (Trigger.isAfter && Trigger.isUpdate) handler.afterUpdate(Trigger.new, Trigger.oldMap);
}

// Handler - all business logic here, easily testable
public with sharing class OpportunityTriggerHandler {
    public void beforeInsert(List<Opportunity> newOpps) {
        OpportunityService.setDefaultStage(newOpps);
    }
    public void afterInsert(List<Opportunity> newOpps) {
        OpportunityService.createWelcomeTask(newOpps);
    }
    // etc.
}
🔑 Key Points for Interviewer
- One trigger file per object — never multiple triggers on same object
- Handler class contains all logic — testable independently
- Service classes for reusable logic called by multiple handlers
- Popular patterns: Kevin O'Hara's TriggerHandler, FFLIB, Apex Enterprise Patterns
- Accenture mandates FFLIB/Enterprise Patterns on all large engagements
🎤 One-Line Answer
"Trigger Framework = one trigger file delegates to handler class — separates logic from events, enables independent unit testing, prevents multiple trigger conflicts, makes any developer productive on day one."
Q38
What is Test.startTest() and Test.stopTest() and why are they essential?
✅ Direct Answer
Test.startTest(): resets governor limits for the code under test — test setup data doesn't eat into limits the unit being tested needs. Test.stopTest(): forces all async operations (@future, Batch, Queueable) to execute synchronously right now — without it, async jobs haven't run yet when you write assertions.
🌍 Real World Example
@isTest
static void testAccountBatch() {
    // Setup (consumes some limits — that's fine)
    List<Account> accs = new List<Account>();
    for (Integer i = 0; i < 200; i++) accs.add(new Account(Name = 'Test ' + i));
    insert accs;

    Test.startTest(); // Reset limits + start async tracking
    Database.executeBatch(new AccountRatingBatch(), 200);
    Test.stopTest(); // Batch runs synchronously RIGHT NOW

    // NOW we can assert on batch results
    Integer updated = [SELECT COUNT() FROM Account WHERE Rating = 'Cold'];
    System.assertEquals(200, updated, 'All 200 accounts should have Rating set to Cold');
}
🔑 Key Points for Interviewer
- startTest(): resets limits AND starts async tracking
- stopTest(): forces all async work to complete synchronously
- Test data setup BEFORE startTest()
- Assertions AFTER stopTest()
- Always wrap the unit under test between startTest/stopTest — not the assertions
🎤 One-Line Answer
"startTest() resets governor limits. stopTest() forces async operations (batch, @future) to complete synchronously — without stopTest(), your assertions run before async work finishes."
Q39
How do you mock HTTP callouts in Apex tests using Test.setMock()?
✅ Direct Answer
Test.setMock() registers a class implementing HttpCalloutMock that intercepts HTTP requests during test execution — returning a predefined response instead of making a real API call (which is blocked in test context). Without setMock(), any test triggering a callout throws "Callout not allowed" exception.
🌍 Real World Example
// 1. Create mock class
@isTest
public class ERPMockResponse implements HttpCalloutMock {
    public HTTPResponse respond(HTTPRequest req) {
        HTTPResponse res = new HTTPResponse();
        res.setBody('{"orderId":"12345","status":"success"}');
        res.setStatusCode(200);
        return res;
    }
}

// 2. Use in test
@isTest
static void testERPCallout() {
    Test.setMock(HttpCalloutMock.class, new ERPMockResponse());
    Test.startTest();
    String result = ERPService.createOrder('ORD-001', 5000);
    Test.stopTest();
    System.assertEquals('12345', result, 'Should return the order ID from ERP');
}
🔑 Key Points for Interviewer
- HttpCalloutMock: for REST callouts
- WebServiceMock: for SOAP callouts
- StaticResourceCalloutMock: uses Static Resource JSON for complex responses
- Can set different responses per URL using MultiRequestMock pattern
- Accenture integration projects always require proper callout mock tests
🎤 One-Line Answer
"Test.setMock() registers an HttpCalloutMock implementation that intercepts HTTP requests in tests — returns predefined response instead of failing with 'Callout not allowed' error."
Q40
What is the difference between Database.insert() and the insert DML statement?
✅ Direct Answer
insert statement: all-or-nothing — if any record fails, entire operation rolls back and throws an exception. Database.insert(records, false): partial success — inserts successful records, returns Database.SaveResult[] with per-record success/failure information. Use Database.insert for bulk integrations where partial success is acceptable.
🌍 Real World Example
List<Account> accounts = new List<Account>{
    new Account(Name = 'Valid Company'),
    new Account() // Missing required Name — will fail
};
Database.SaveResult[] results = Database.insert(accounts, false); // allOrNone = false
for (Database.SaveResult sr : results) {
    if (sr.isSuccess()) {
        System.debug('Inserted: ' + sr.getId());
    } else {
        for (Database.Error err : sr.getErrors()) {
            System.debug('Error: ' + err.getMessage() + ' on fields: ' + err.getFields());
        }
    }
}
🔑 Key Points for Interviewer
- insert keyword: all-or-nothing, simpler code, throws DmlException
- Database.insert(list, false): partial success, error per record
- Database.SaveResult[]: check isSuccess() per record
- Also: Database.update(), Database.delete(), Database.upsert()
- Use partial success in integrations where some records may be invalid
🎤 One-Line Answer
"insert keyword is all-or-nothing (rolls back on any failure). Database.insert(list, false) allows partial success — successful records save, failed ones return per-record errors without rolling back the rest."
Q41
What are OOPS concepts in Apex? Explain with Salesforce examples.
✅ Direct Answer
Apex supports all four OOP pillars: Encapsulation (private variables, public methods), Inheritance (class extension with virtual/override), Polymorphism (method overloading and overriding), Abstraction (abstract classes and interfaces like Database.Batchable, Queueable).
🌍 Real World Example
// Encapsulation
public class AccountService {
    private List<Account> accounts;
    public List<Account> getAccounts() { return accounts; } // controlled access
}

// Inheritance + Polymorphism
public virtual class BaseTriggerHandler {
    public virtual void beforeInsert(List<SObject> newRecords) {}
}
public class AccountTriggerHandler extends BaseTriggerHandler {
    public override void beforeInsert(List<SObject> newRecords) {
        // Account-specific logic
    }
}

// Abstraction - Interface (Database.Batchable is itself an interface)
public interface IProcessor { void process(List<SObject> records); }
public class AccountProcessor implements IProcessor {
    public void process(List<SObject> records) { /* logic */ }
}
🔑 Key Points for Interviewer
- virtual: allows method override in child class
- override: marks overriding method in child
- abstract: can't instantiate — must be extended
- interface: contract, no implementation — Batchable and Queueable are interfaces
- Accenture uses inheritance heavily in FFLIB/Enterprise Patterns
🎤 One-Line Answer
"Apex supports all 4 OOP pillars — Encapsulation (private/public), Inheritance (virtual/extends/override), Polymorphism (overloading/overriding), Abstraction (abstract/interface) — Database.Batchable itself is an interface example."
Q42
What is a Wrapper Class in Apex and when do you use it?
✅ Direct Answer
A Wrapper Class bundles multiple data types — SObjects, primitives, computed values — into a single unit for passing between Apex and LWC/Visualforce. Used when a single SObject can't represent the data structure needed by the UI — for example, combining Account data with a computed isSelected flag and related record count.
🌍 Real World Example
public class AccountWrapper {
    @AuraEnabled public Account account { get; set; }
    @AuraEnabled public Boolean isSelected { get; set; }
    @AuraEnabled public Integer opportunityCount { get; set; }
    public AccountWrapper(Account acc, Integer oppCount) {
        this.account = acc;
        this.isSelected = false;
        this.opportunityCount = oppCount;
    }
}

@AuraEnabled(cacheable=true)
public static List<AccountWrapper> getAccountsWithCount() {
    List<AccountWrapper> result = new List<AccountWrapper>();
    for (Account a : [SELECT Id, Name, (SELECT Id FROM Opportunities) FROM Account LIMIT 100]) {
        result.add(new AccountWrapper(a, a.Opportunities.size()));
    }
    return result;
}
🔑 Key Points for Interviewer
- @AuraEnabled on each property makes it accessible from LWC
- Can define as inner class inside controller for cleaner packaging
- Use for: multi-SObject data, computed fields, checkbox list items
- JSON serialization: wrapper auto-serialized when returned to LWC
- Implement Comparable interface to enable sorting of wrapper lists
🎤 One-Line Answer
"Wrapper class bundles SObjects + computed fields into one unit for LWC — use when a single SObject can't represent the full data structure the UI needs."
Q43
What is a Custom Exception in Apex and when should you create one?
✅ Direct Answer
Custom Exceptions extend the Exception class and allow you to throw domain-specific errors with meaningful names — making error handling explicit and readable. Create one when you need to distinguish between different error types in catch blocks or when standard DmlException/QueryException don't convey the business context.
🌍 Real World Example
// Define custom exception
public class OrderValidationException extends Exception {}
public class InsufficientInventoryException extends Exception {}
// Use in service class
public class OrderService {
public static void createOrder(Order__c order) {
if(order.Amount__c <= 0) {
throw new OrderValidationException('Order amount must be greater than zero');
}
Integer stock = getInventoryLevel(order.Product__c);
if(stock < order.Quantity__c) {
throw new InsufficientInventoryException(
'Only ' + stock + ' units available. Requested: ' + order.Quantity__c
);
}
insert order;
}
}
// Caller can catch specifically
try {
OrderService.createOrder(order);
} catch(InsufficientInventoryException e) {
// Show inventory alert
} catch(OrderValidationException e) {
// Show validation error
}
🔑 Key Points for Interviewer
- Extend Exception class: public class MyException extends Exception {}
- One line — Apex generates constructor, getMessage(), etc. automatically
- Enables specific catch blocks per error type
- More readable than generic DmlException in domain-specific code
- Accenture code standards require custom exceptions for service layer errors
🎤 One-Line Answer
"Custom Exceptions extend Exception in one line — enable specific catch blocks per error type and convey business context that generic DmlException or QueryException can't express."
Q44
What is the @InvocableMethod annotation and how does it connect Apex to Flows?
✅ Direct Answer
@InvocableMethod exposes an Apex method as a Flow Action — callable from Record-Triggered Flows, Screen Flows, and Auto-launched Flows via the Action element. It bridges declarative Flow with programmatic Apex for logic that's too complex for Flow alone (callouts, complex calculations, multi-object operations).
🌍 Real World Example
public class TaxCalculator {
@InvocableMethod(label='Calculate Tax' description='Calculates GST for order amount' category='Order')
public static List<Result> calculateTax(List<Request> requests) {
List<Result> results = new List<Result>();
for(Request req : requests) {
Result res = new Result();
res.taxAmount = req.orderAmount * 0.18; // 18% GST
res.totalAmount = req.orderAmount + res.taxAmount;
results.add(res);
}
return results;
}
public class Request {
@InvocableVariable(required=true) public Decimal orderAmount;
}
public class Result {
@InvocableVariable public Decimal taxAmount;
@InvocableVariable public Decimal totalAmount;
}
}
🔑 Key Points for Interviewer
- Must accept List and return List — handles bulkification automatically
- @InvocableVariable: annotates input/output parameters for Flow mapping
- Only one @InvocableMethod allowed per class — a platform restriction, not just a best practice
- Can make callouts from invocable methods — enables Flow → external API
- label and description appear in Flow Builder Action picker
🎤 One-Line Answer
"@InvocableMethod exposes Apex as a Flow Action element — bridges Flow's declarative power with Apex's programmatic capability for callouts, complex calculations, and operations Flow can't natively perform."
Q45
What is Change Data Capture (CDC) and how is it different from Platform Events?
✅ Direct Answer
CDC automatically publishes change events whenever Salesforce records are created, updated, deleted, or undeleted — external systems subscribe via CometD or Apex triggers. Platform Events are developer-published custom events. CDC = Salesforce-generated for record changes. Platform Events = developer-controlled for any business event.
🌍 Real World Example
Accenture ERP integration: SAP needs to sync Account changes from Salesforce. Old approach: SAP polls Salesforce REST API every 5 minutes — 288 API calls/day per object. CDC approach: SAP subscribes to AccountChangeEvent — receives changes within seconds. changedFields header tells SAP exactly which fields changed — no need to compare full records. Zero polling, real-time, 99% fewer API calls.
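On the Salesforce side, subscription can also be an asynchronous change event trigger. A minimal sketch — the Phone check and debug output are illustrative, not part of the SAP scenario above:
trigger AccountChangeTrigger on AccountChangeEvent (after insert) {
for(AccountChangeEvent evt : Trigger.new) {
EventBus.ChangeEventHeader header = evt.ChangeEventHeader;
// changeType: CREATE, UPDATE, DELETE, UNDELETE
// changedFields: populated for UPDATE events only
if(header.changeType == 'UPDATE' && header.changedFields.contains('Phone')) {
System.debug('Phone changed on: ' + header.recordIds);
}
}
}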
🔑 Key Points for Interviewer
- CDC events: AccountChangeEvent, ContactChangeEvent, CustomObj__ChangeEvent
- changedFields header: only lists changed fields — efficient delta sync
- Replay missed events for up to 3 days (72 hours) — the same retention window as high-volume Platform Events
- CDC vs PE: CDC = Salesforce-generated. PE = developer-published
- Enable CDC in Setup → Change Data Capture per object
🎤 One-Line Answer
"CDC auto-publishes record change events (create/update/delete) — external systems subscribe for real-time sync without API polling. Platform Events are developer-published for custom business events."
Q46
What is the Apex REST web service (@RestResource) and how do you create one?
🌍 Real World Example
@RestResource(urlMapping='/orders/*')
global class OrderRestService {
@HttpGet
global static Order__c getOrder() {
RestRequest req = RestContext.request;
String orderId = req.requestURI.substring(req.requestURI.lastIndexOf('/') + 1);
return [SELECT Id, Name, Amount__c, Status__c FROM Order__c WHERE Id = :orderId];
}
@HttpPost
global static String createOrder(String productName, Decimal amount) {
Order__c order = new Order__c(Name = productName, Amount__c = amount, Status__c = 'New');
insert order;
return order.Id;
}
@HttpDelete
global static String cancelOrder() {
RestRequest req = RestContext.request;
String orderId = req.requestURI.substring(req.requestURI.lastIndexOf('/') + 1);
Order__c order = [SELECT Id FROM Order__c WHERE Id = :orderId];
order.Status__c = 'Cancelled';
update order;
return 'Order cancelled';
}
}
// URL: https://instance.salesforce.com/services/apexrest/orders/
🔑 Key Points for Interviewer
- @RestResource: defines URL mapping under /services/apexrest/
- @HttpGet, @HttpPost, @HttpPut, @HttpDelete, @HttpPatch: HTTP method annotations
- RestContext.request and RestContext.response: access request/response objects
- Requires Connected App + OAuth authentication to call from outside
- Test using RestRequest/RestResponse setup in test class
🎤 One-Line Answer
"@RestResource exposes Apex class as custom REST endpoint at /services/apexrest/ — use @HttpGet/@HttpPost methods; requires OAuth authentication from external callers."
Q47
What is a Schedulable Apex class and how do you schedule it?
🌍 Real World Example
public class NightlyCleanupScheduler implements Schedulable {
public void execute(SchedulableContext ctx) {
// Call a batch job for the actual work
Database.executeBatch(new ExpiredRecordsBatch(), 200);
}
}
// Schedule via Apex (one-time setup)
String cronExp = '0 0 2 * * ?'; // Every day at 2:00 AM
String jobName = 'Nightly Cleanup Job';
System.schedule(jobName, cronExp, new NightlyCleanupScheduler());
// OR schedule via Setup → Scheduled Jobs UI (no code needed)
🔑 Key Points for Interviewer
- Implements Schedulable interface — one execute() method
- Cron expression: Seconds Minutes Hours Day_of_month Month Day_of_week Optional_year
- System.schedule(): programmatic scheduling from Apex
- Max 100 scheduled jobs per org
- Don't do heavy work in execute() — delegate to Batch or Queueable
🎤 One-Line Answer
"Schedulable implements one execute() method — schedule with System.schedule() using a CRON expression. Always delegate heavy work to Batch Apex inside execute(), don't process records directly."
Q48
What is the difference between a static method and an instance method in Apex?
✅ Direct Answer
Static methods belong to the class — called as ClassName.method(), no instantiation needed, no access to instance variables. Instance methods belong to an object — called on an instance (new ClassName()).method(), have access to all instance variables. Static variables are shared across all instances; instance variables are per-object.
🌍 Real World Example
public class AccountUtils {
// Static - utility method, no state needed
public static String formatAccountName(String name) {
return name.trim().capitalize();
}
// Instance - needs object state
private List<Account> processedAccounts = new List<Account>();
public void processAccount(Account a) {
a.Name = formatAccountName(a.Name); // Can call static methods
processedAccounts.add(a);
}
public List<Account> getProcessed() { return processedAccounts; }
}
// Usage
String name = AccountUtils.formatAccountName(' acme corp '); // Static - no new needed
AccountUtils utils = new AccountUtils(); // Instance - needs new
utils.processAccount(myAccount);
🔑 Key Points for Interviewer
- Static: shared, no instance, utility/helper methods
- Instance: per-object, stateful, business logic with maintained state
- Static variables persist for entire transaction (used for recursion prevention)
- Test methods are always static (@isTest static void)
- Service class pattern: static methods for stateless operations
🎤 One-Line Answer
"Static methods belong to the class (no instance needed), share state across calls, great for utilities. Instance methods belong to objects, have their own state — use when maintaining context between method calls."
Q49
What are the best practices for writing Apex test classes?
✅ Direct Answer
Key practices: 75% minimum coverage (target 85%+), always use System.assert with meaningful failure messages, use Test.startTest()/stopTest() for async code, create test data with TestDataFactory (never use seeAllData=true), test positive AND negative scenarios, one assertion per logical outcome, mock all callouts.
🌍 Real World Example
@isTest
private class OrderServiceTest {
@TestSetup
static void setupData() {
// Create data once for all tests in class
Account acc = TestDataFactory.createAccount('Test Corp');
insert acc;
}
@isTest
static void testCreateOrder_Success() {
Account acc = [SELECT Id FROM Account LIMIT 1];
Test.startTest();
Order__c order = OrderService.createOrder(acc.Id, 5000);
Test.stopTest();
System.assertNotEquals(null, order.Id, 'Order should be created');
System.assertEquals('New', order.Status__c, 'New order status should be New');
System.assertEquals(5000, order.Amount__c, 'Amount should match');
}
@isTest
static void testCreateOrder_NegativeAmount() {
Account acc = [SELECT Id FROM Account LIMIT 1];
Test.startTest();
try {
OrderService.createOrder(acc.Id, -100);
System.assert(false, 'Should have thrown OrderValidationException');
} catch(OrderValidationException e) {
System.assert(e.getMessage().contains('greater than zero'), 'Error message mismatch');
}
Test.stopTest();
}
}
🔑 Key Points for Interviewer
- @TestSetup: runs once before all tests in class — share data setup cost
- Never seeAllData=true: isolates tests from org data changes
- TestDataFactory: centralized test data creation for consistency
- Test both happy path AND exception/error scenarios
- Accenture code reviews reject tests without assertions
🎤 One-Line Answer
"Best practices: @TestSetup for shared data, seeAllData=false, TestDataFactory, test positive AND negative scenarios, meaningful assertions on every outcome, mock callouts — coverage proves execution, assertions prove correctness."
Q50
What is the difference between System.debug() and using Apex logs for debugging?
✅ Direct Answer
System.debug() writes to the debug log — visible in Developer Console or SFDX logs. Debug levels control verbosity (NONE, ERROR, WARN, INFO, DEBUG, FINE, FINER, FINEST). In production, remove or reduce debug statements — they consume heap and CPU. Use Apex Exception Email for production error alerting instead.
🌍 Real World Example
// Debug levels in practice
System.debug(LoggingLevel.ERROR, 'Critical: Order failed for ID: ' + orderId);
System.debug(LoggingLevel.INFO, 'Processing ' + records.size() + ' records');
System.debug(LoggingLevel.FINE, 'Entering calculateTax method');
// Guarded debug — this only runs in test context, keeping production logs clean
if(Test.isRunningTest()) {
System.debug('Test mode: ' + variable);
}
// Better: use a configurable logger class
Logger.log('Order created: ' + orderId, LoggingLevel.INFO);
🔑 Key Points for Interviewer
- Debug log max size: 20 MB — excessive logging truncates the log
- LoggingLevel: controls which messages appear based on org/user settings
- Developer Console: view real-time debug logs during development
- Checkpoints: pause execution at specific lines to inspect variables
- Remove FINEST/FINER debug logs before production deployment
🎤 One-Line Answer
"System.debug() writes to Apex debug logs with configurable logging levels — use ERROR/WARN in production, FINE/FINER only in development. Excessive logging consumes heap and truncates the 20 MB log limit."
🔍 Section 4 — SOQL & SOSL
Q51–Q60 · Intermediate · Query Optimization & Security Focus
Q51
What is the difference between SOQL and SOSL?
✅ Direct Answer
SOQL: queries one object at a time (with related objects via relationship queries), returns List<SObject>, precise filtering with WHERE clause. SOSL: searches across multiple objects using text search index, returns List<List<SObject>>, faster for cross-object text searches. Use SOQL for precise data retrieval; SOSL for global search across objects.
🌍 Real World Example
// SOQL - precise, one object
List<Account> accs = [SELECT Id, Name FROM Account WHERE Industry = 'Technology' AND AnnualRevenue > 1000000];
// SOSL - cross-object text search
List<List<SObject>> results = [FIND 'Accenture*' IN ALL FIELDS
RETURNING Account(Id, Name), Contact(Id, Name, Email), Lead(Id, Name)];
List<Account> accounts = (List<Account>) results[0];
List<Contact> contacts = (List<Contact>) results[1];
// One search found Accenture in Accounts AND Contacts simultaneously
🔑 Key Points for Interviewer
- SOQL: one object, WHERE clause, List<SObject> result
- SOSL: multiple objects, text search index, List<List<SObject>> result
- SOSL faster for text search — uses pre-built search index
- SOSL won't search formula fields, rich text, or non-indexed fields
- Both count toward governor limits (SOQL: 100, SOSL: 20 per transaction)
🎤 One-Line Answer
"SOQL queries one object precisely with WHERE clause filtering. SOSL searches text across multiple objects simultaneously — use SOQL for data retrieval, SOSL for global search scenarios."
Q52
How do you write parent-to-child and child-to-parent relationship queries in SOQL?
🌍 Real World Example
// Child-to-Parent: dot notation on relationship field
List<Contact> contacts = [
SELECT Id, Name, Account.Name, Account.Industry, Account.Owner.Name
FROM Contact
WHERE Account.Industry = 'Technology'
];
// Parent-to-Child: subquery using child relationship name
List<Account> accounts = [
SELECT Id, Name,
(SELECT Id, Name, Email FROM Contacts WHERE IsActive__c = true ORDER BY Name),
(SELECT Id, Amount, StageName FROM Opportunities WHERE IsClosed = false)
FROM Account
WHERE Type = 'Customer'
];
// Access subquery results
for(Account acc : accounts) {
System.debug('Account: ' + acc.Name + ' has ' + acc.Contacts.size() + ' contacts');
for(Contact c : acc.Contacts) { System.debug(c.Name); }
}
// Custom relationship names: remove __c → add __r
// Order__c → Order__r (child-to-parent)
// Order_Line_Items__r (parent-to-child, pluralized)
🔑 Key Points for Interviewer
- Child-to-parent: AccountId → Account.Name (standard), Obj__r.Field__c (custom)
- Parent-to-child: (SELECT Id FROM Contacts) — standard: Contacts, custom: Custom_Objects__r
- Up to 5 levels of child-to-parent traversal
- Subquery results: the API may return children in chunks of ~200 per parent (queryMore); in Apex, child rows count toward the row limit
- Custom lookup: replace __c with __r for relationship traversal
🎤 One-Line Answer
"Child-to-parent: dot notation (Account.Name). Parent-to-child: subquery (SELECT Name FROM Contacts). Custom relationships use __r instead of __c — Account__r.Name or (SELECT Id FROM Custom_Objects__r)."
Q53
What is SOQL injection and how do you prevent it?
✅ Direct Answer
SOQL injection occurs when user-supplied input is directly concatenated into a dynamic SOQL string — allowing attackers to modify query logic and access unauthorized records. Prevent with: bind variables (:variable — completely injection-proof), or String.escapeSingleQuotes() for field/object names that must be dynamic.
🌍 Real World Example
// VULNERABLE - injection possible
String name = ApexPages.currentPage().getParameters().get('name');
// Attacker enters: ' OR Name != ' → query becomes: WHERE Name = '' OR Name != ''
String q = 'SELECT Id FROM Account WHERE Name = \'' + name + '\'';
// SAFE - bind variable (best approach)
String q = 'SELECT Id FROM Account WHERE Name = :name'; // name bound safely
List<Account> accs = Database.query(q);
// SAFE - escaped (for dynamic field/object names)
String safeName = String.escapeSingleQuotes(name);
String q2 = 'SELECT Id FROM Account WHERE Name = \'' + safeName + '\'';
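For fully dynamic SOQL, Database.queryWithBinds() (API v57+) passes values in a bind map so they never touch the query string. A minimal sketch — nameParam is an illustrative bind key:
// SAFE - queryWithBinds: values travel in a map, not in the string
Map<String, Object> binds = new Map<String, Object>{ 'nameParam' => name };
List<Account> safeAccs = Database.queryWithBinds(
'SELECT Id FROM Account WHERE Name = :nameParam',
binds,
AccessLevel.USER_MODE // also enforces object- and field-level security
);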
🔑 Key Points for Interviewer
- Risk ONLY in dynamic SOQL (Database.query) — static SOQL is always safe
- Bind variables (:var): safest — platform handles escaping automatically
- String.escapeSingleQuotes(): sanitizes string values in dynamic queries
- Database.queryWithBinds(): explicit bind map for fully dynamic SOQL
- Accenture security reviews specifically scan for SOQL injection vulnerabilities
🎤 One-Line Answer
"SOQL injection manipulates dynamic query logic via user input — prevent with bind variables (:variable) which are completely injection-proof, or String.escapeSingleQuotes() for dynamic field/object names."
Q54
What are SOQL aggregate functions and how do you use GROUP BY and HAVING?
🌍 Real World Example
// Aggregate functions with GROUP BY and HAVING
AggregateResult[] results = [
SELECT StageName,
COUNT(Id) oppCount,
SUM(Amount) totalValue,
AVG(Amount) avgValue,
MAX(Amount) maxDeal
FROM Opportunity
WHERE CloseDate = THIS_FISCAL_YEAR
AND IsClosed = false
GROUP BY StageName
HAVING SUM(Amount) > 500000
ORDER BY SUM(Amount) DESC
]; // HAVING keeps only stages with more than 500K of pipeline
for(AggregateResult ar : results) {
System.debug(
'Stage: ' + ar.get('StageName') +
' | Count: ' + ar.get('oppCount') +
' | Total: £' + ar.get('totalValue')
);
}
🔑 Key Points for Interviewer
- Functions: COUNT, SUM, AVG, MIN, MAX, COUNT_DISTINCT
- Returns AggregateResult[] — access with ar.get('fieldOrAlias')
- WHERE: filters individual records BEFORE aggregation
- HAVING: filters aggregate results AFTER GROUP BY
- GROUP BY ROLLUP: adds subtotals; GROUP BY CUBE: all combinations
🎤 One-Line Answer
"Aggregate functions (COUNT/SUM/AVG/MIN/MAX) with GROUP BY return AggregateResult[]. WHERE filters records before aggregation; HAVING filters the aggregated results — use ar.get('fieldName') to access values."
Q55
What is a selective query and why does it matter for large data volumes?
✅ Direct Answer
A selective query filters on indexed fields — allowing Salesforce's database to use an index scan instead of a full table scan. Non-selective queries on objects with >100K records can cause QUERY_TIMEOUT errors or extreme slowness. As a rule of thumb, a filter on a custom index is selective when it targets under 10% of the first million records (capped at 333,333 rows); standard indexes allow up to 30% (capped at one million).
🌍 Real World Example
500K Account records. WHERE Custom_Segment__c = 'Enterprise' (non-indexed custom field) — full table scan, potential 10-second timeout. Fix options: 1) Request custom index on Custom_Segment__c via Salesforce Support. 2) Use External ID field (auto-indexed). 3) Add an auto-indexed field to WHERE: AND CreatedDate > LAST_N_DAYS:365. Developer Console → Query Plan shows "TableScan" vs "Index" for each WHERE condition.
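The fix in SOQL terms — a sketch with illustrative field names:
// Non-selective: lone filter on a non-indexed custom field → full table scan
List<Account> slow = [SELECT Id FROM Account WHERE Custom_Segment__c = 'Enterprise'];
// More selective: an auto-indexed field narrows the scanned set first
List<Account> faster = [SELECT Id FROM Account
WHERE Custom_Segment__c = 'Enterprise'
AND CreatedDate = LAST_N_DAYS:365];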
🔑 Key Points for Interviewer
- Auto-indexed: Id, Name, OwnerId, CreatedDate, SystemModStamp, RecordTypeId
- Custom index: request via Salesforce Support, takes 24-48 hours
- Query Plan tool: Developer Console → Query Editor → Query Plan button
- Selectivity rule: WHERE must narrow to <10% of records
- Large Data Volumes (LDV): critical concern on every Accenture enterprise project
🎤 One-Line Answer
"Selective queries filter on indexed fields for index scans — non-selective queries on large objects cause timeouts. Use Query Plan tool to verify; request custom indexes for frequently-queried non-indexed fields."
Q56
What are SOQL Date Literals? Give five examples.
✅ Direct Answer
SOQL Date Literals are dynamic date expressions that auto-calculate relative to now — no hardcoded dates or Apex variables needed. Essential for scheduled batch queries that must always query relative to run time.
🌍 Real World Example
// Five practical examples
// 1. Leads created in last 30 days
List<Lead> recentLeads = [SELECT Id FROM Lead WHERE CreatedDate = LAST_N_DAYS:30];
// 2. Open opportunities closing this quarter
List<Opportunity> thisQtr = [SELECT Id FROM Opportunity WHERE CloseDate = THIS_FISCAL_QUARTER AND IsClosed = false];
// 3. Cases not updated in last 7 days (overdue)
List<Case> overdue = [SELECT Id FROM Case WHERE LastModifiedDate < LAST_N_DAYS:7 AND Status != 'Closed'];
// 4. Accounts created this month
List<Account> newAccs = [SELECT Id FROM Account WHERE CreatedDate = THIS_MONTH];
// 5. Tasks due in next 5 days
List<Task> upcoming = [SELECT Id FROM Task WHERE ActivityDate = NEXT_N_DAYS:5 AND Status != 'Completed'];
🔑 Key Points for Interviewer
- TODAY, YESTERDAY, THIS_WEEK, LAST_WEEK, THIS_MONTH, LAST_MONTH
- LAST_N_DAYS:n, NEXT_N_DAYS:n — most flexible
- THIS_FISCAL_QUARTER, LAST_FISCAL_YEAR — org's fiscal calendar
- Automatically handles time zones based on org settings
- Perfect for Scheduled Batch queries — always relative to run time
🎤 One-Line Answer
"Date Literals (TODAY, LAST_N_DAYS:30, THIS_FISCAL_QUARTER) dynamically calculate dates relative to now — no hardcoded dates or Apex variables, perfect for scheduled batch queries."
Q57
What is FOR UPDATE in SOQL and when should you use it?
✅ Direct Answer
FOR UPDATE locks queried records at the database level — no other transaction can modify them until the current transaction completes. Use when multiple concurrent processes might update the same records, preventing race conditions and data corruption (e.g., decrementing a counter or balance).
🌍 Real World Example
// Inventory deduction - concurrent transactions could both read same stock level
// WITHOUT FOR UPDATE: two transactions both read stock=10, both deduct 8, both succeed but stock goes negative
// WITH FOR UPDATE: second transaction waits until first completes
List<Inventory__c> inv = [SELECT Id, Stock__c FROM Inventory__c WHERE Product__c = :productId FOR UPDATE];
// Record locked - only this transaction can modify it
if(inv[0].Stock__c >= requestedQty) {
inv[0].Stock__c -= requestedQty;
update inv[0];
} else {
throw new InsufficientInventoryException('Insufficient stock');
}
// Lock released after DML
🔑 Key Points for Interviewer
- FOR UPDATE: row-level lock for transaction duration
- UNABLE_TO_LOCK_ROW error if lock can't be acquired within 10 seconds
- Not available in async context (@future, Batch)
- Use sparingly — can cause deadlocks if multiple processes lock different records in different orders
- Alternative: optimistic locking pattern (check version field before update)
🎤 One-Line Answer
"FOR UPDATE locks records for the transaction duration — prevents race conditions when concurrent processes read-then-write the same records. Use carefully; misuse causes UNABLE_TO_LOCK_ROW errors."
Q58
What is Database.getQueryLocator() vs Database.query() in Batch Apex?
✅ Direct Answer
Database.getQueryLocator(): used in Batch start() method, processes up to 50 million records using a server-side cursor — never loads all records into memory at once. Database.query(): standard dynamic SOQL, loads all results into heap and is capped at 50,000 rows by the SOQL governor limit. Always use getQueryLocator in Batch start().
🌍 Real World Example
global class AccountBatch implements Database.Batchable<SObject> {
global Database.QueryLocator start(Database.BatchableContext bc) {
// QueryLocator: server-side cursor, 50M record capacity
return Database.getQueryLocator('SELECT Id, Rating FROM Account WHERE Rating = null');
// vs Database.query(): capped at 50,000 rows by the SOQL governor limit
}
global void execute(Database.BatchableContext bc, List<Account> scope) {
// scope = 200 records (or your batch size)
// Fresh governor limits per execute() call
for(Account a : scope) a.Rating = 'Cold';
update scope;
}
global void finish(Database.BatchableContext bc) {}
}
🔑 Key Points for Interviewer
- QueryLocator: 50M record limit, server-side cursor, memory-efficient
- Database.query(): capped at 50,000 rows (SOQL row governor limit), loads results into heap
- Iterable<T> alternative in start(): for non-SOQL data sources
- QueryLocator is always the right choice for Batch start()
- Chunk size (executeBatch 2nd param): how many records per execute() call
🎤 One-Line Answer
"getQueryLocator() handles 50M records via a server-side cursor in Batch start(). Database.query() loads results into heap and is capped at 50,000 rows by the SOQL governor limit — always use getQueryLocator in Batch Apex."
Q59
What is a dynamic SOQL query and when would you use it?
✅ Direct Answer
Dynamic SOQL builds query strings at runtime using Database.query(String). Use when the WHERE clause, field list, or object name isn't known at compile time — typically in generic utility classes, configurable reporting tools, or metadata-driven frameworks where different callers query different objects/fields.
🌍 Real World Example
// Dynamic query utility - object and filter are runtime decisions
public static List<SObject> searchRecords(String objectName, Map<String, String> filters) {
String query = 'SELECT Id, Name FROM ' + String.escapeSingleQuotes(objectName);
if(!filters.isEmpty()) {
List<String> conditions = new List<String>();
for(String field : filters.keySet()) {
String val = filters.get(field);
// Use bind approach for values
conditions.add(String.escapeSingleQuotes(field) + ' = \'' + String.escapeSingleQuotes(val) + '\'');
}
query += ' WHERE ' + String.join(conditions, ' AND ');
}
return Database.query(query);
}
// Called for Account: searchRecords('Account', new Map<String,String>{'Industry'=>'Tech'})
// Called for Lead: searchRecords('Lead', new Map<String,String>{'Status'=>'Open'})
🔑 Key Points for Interviewer
- Database.query(String): execute dynamic SOQL at runtime
- Always sanitize field/object names: String.escapeSingleQuotes()
- Use bind variables (:varName) for value parameters — injection-proof
- Database.countQuery(): dynamic COUNT() without fetching records
- Dynamic SOQL in Apex is powerful but avoid over-engineering — static SOQL when possible
🎤 One-Line Answer
"Dynamic SOQL builds queries at runtime via Database.query(String) — use for configurable/generic frameworks, always sanitize object/field names with escapeSingleQuotes() and use bind variables for values."
Q60
What is the SOQL query limit for synchronous vs asynchronous contexts?
✅ Direct Answer
Synchronous (triggers, controllers, anonymous Apex): 100 SOQL queries per transaction. Asynchronous (@future, Batch execute(), Queueable): 200 SOQL queries per transaction. Both contexts also share the same DML limits: 150 DML statements and 10,000 DML rows per transaction.
🌍 Real World Example
Trigger fires synchronously — 100 SOQL limit. If it calls 3 different service classes each making queries, they all share the same 100-query pool. Move complex multi-query processing to a Queueable (200 limit). Batch execute() also gets 200 queries per chunk — fresh limit each chunk. Pattern: sync trigger handles simple validation (1-2 queries), delegates complex processing to async (full 200 limit).
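A defensive sketch using the Limits class — HeavyProcessingQueueable and processRecordsNow are hypothetical names:
// Check remaining SOQL headroom before heavy processing
if(Limits.getQueries() > Limits.getLimitQueries() - 10) {
// Near the ceiling — defer to async for a fresh (and doubled) quota
System.enqueueJob(new HeavyProcessingQueueable(recordIds));
} else {
processRecordsNow(recordIds); // plenty of quota left
}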
🔑 Key Points for Interviewer
- Sync: 100 SOQL, 150 DML, 6 MB heap, 10,000 ms CPU
- Async: 200 SOQL, 150 DML, 12 MB heap, 60,000 ms CPU
- Batch execute(): each chunk gets fresh limits
- Trigger → @future: async gains more SOQL headroom
- Limits.getQueries() at runtime to check remaining quota
🎤 One-Line Answer
"Sync context: 100 SOQL queries. Async context (@future, Batch, Queueable): 200 SOQL queries per transaction — move complex multi-query logic to async to benefit from doubled limit headroom."
💻 Section 5 — Lightning Web Components (LWC)
Q61–Q75 · Intermediate → Advanced · Live Coding Common
Q61
What is LWC and how does it differ from Aura Components?
✅ Direct Answer
LWC uses modern web standards — native JavaScript ES6+, HTML5, Shadow DOM, Web Components specification. Aura uses Salesforce's proprietary framework with custom syntax. LWC is significantly faster (native browser APIs, smaller payloads), has better developer experience (standard JS tools work), and is Salesforce's strategic direction for all new development.
🌍 Real World Example
Aura: <aura:attribute name="accounts" type="List"/> — proprietary syntax, large framework overhead. LWC: @track accounts = []; — standard JavaScript, familiar to any web developer. LWC component loads 3x faster than equivalent Aura because it uses native browser APIs instead of Salesforce's abstraction layer. Accenture mandates LWC for all new components.
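Sketched side by side — component names are illustrative:
// Aura markup (accountList.cmp) — proprietary attribute system
// <aura:component controller="AccountController">
//     <aura:attribute name="accounts" type="List"/>
// </aura:component>
// LWC (accountList.js) — a standard ES6 module any web developer can read
import { LightningElement } from 'lwc';
export default class AccountList extends LightningElement {
accounts = []; // reassignment is reactive by default — no framework syntax needed
}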
🔑 Key Points for Interviewer
- LWC: ES6+, Shadow DOM, modern standards, native performance
- Aura: proprietary syntax, heavier, legacy — still maintained but no new features
- LWC can be embedded in Aura (not vice versa)
- Both use Lightning Design System (SLDS) for styling
- Accenture: LWC for all new development; Aura only for maintaining existing
🎤 One-Line Answer
"LWC uses native web standards (ES6+, Shadow DOM) for faster performance vs Aura's proprietary framework — Salesforce's strategic direction; Accenture mandates LWC for all new component development."
Q62
Explain the LWC component lifecycle. How many times does renderedCallback fire?
✅ Direct Answer
Lifecycle order: constructor() (once, first) → connectedCallback() (once, when added to DOM) → render() (every render) → renderedCallback() (after EVERY render — fires multiple times). disconnectedCallback() fires once on removal. errorCallback() catches child component errors. renderedCallback fires on initial render AND every re-render triggered by property changes.
🌍 Real World Example
import { LightningElement, track } from 'lwc';
export default class MyComp extends LightningElement {
@track counter = 0;
isInitialized = false;
constructor() { super(); } // Fires ONCE — no DOM access here
connectedCallback() { // Fires ONCE — safe for data loading
this.loadInitialData();
}
renderedCallback() { // Fires EVERY RENDER — guard with boolean!
if(this.isInitialized) return;
this.isInitialized = true;
// Safe to access DOM elements — they exist after first render
const canvas = this.template.querySelector('canvas');
if(canvas) this.initializeChart(canvas);
}
incrementCounter() {
this.counter++; // Triggers re-render → renderedCallback fires again
// Without isInitialized guard, initializeChart() would fire every click!
}
}
🔑 Key Points for Interviewer
- constructor: 1x, no DOM, no parent properties yet
- connectedCallback: 1x, DOM available, use for data loading
- renderedCallback: EVERY render — always use isInitialized boolean guard
- disconnectedCallback: 1x, cleanup event listeners here
- This is Accenture's most asked LWC interview question
🎤 One-Line Answer
"connectedCallback fires once. renderedCallback fires after EVERY render — always guard it with isInitialized boolean to prevent duplicate initialization on property-change re-renders."
Q63
What is the difference between @api, @track, and @wire in LWC?
🌍 Real World Example
import { LightningElement, api, track, wire } from 'lwc';
import getAccounts from '@salesforce/apex/AccountController.getAccounts';
export default class AccountDashboard extends LightningElement {
    @api recordId; // PUBLIC — parent passes this value in HTML

    @api get title() { // PUBLIC getter with logic
        return this._title || 'Accounts';
    }
    set title(val) { this._title = val.toUpperCase(); }

    @track filters = { // REACTIVE — deep mutations tracked (arrays/objects)
        industry: '',
        type: ''
    };

    @wire(getAccounts, { // REACTIVE WIRE — re-fires when recordId changes
        accId: '$recordId', // $ prefix = reactive parameter
        industry: '$filters.industry'
    })
    wiredAccounts({ data, error }) {
        if (data) this.accounts = data;
        if (error) this.errorMsg = error.body.message;
    }
}
🔑 Key Points for Interviewer
- @api: public interface — parent-to-child data flow, external access
- @track: reactive for deep object/array mutations (less needed in modern LWC)
- @wire: declarative Apex/data binding, $ prefix makes params reactive
- @wire returns {data, error} — always check both
- Wire re-fires automatically when reactive ($) parameters change
🎤 One-Line Answer
"@api = public property for parent access. @track = reactive deep mutations (less needed now). @wire = declarative Apex binding that auto-refires when $ parameters change — returns {data, error}."
Q64
How do parent and child LWC components communicate?
🌍 Real World Example
// CHILD COMPONENT (productCard.js)
import { LightningElement, api } from 'lwc';
export default class ProductCard extends LightningElement {
    @api product; // Receives data FROM parent via @api

    handleAddToCart() {
        // Sends data TO parent via CustomEvent
        this.dispatchEvent(new CustomEvent('addtocart', {
            detail: { productId: this.product.Id, quantity: this.quantity },
            bubbles: true,  // Event propagates up DOM tree
            composed: true  // Event crosses Shadow DOM boundaries
        }));
    }
}

// PARENT HTML
// <c-product-card product={selectedProduct} onaddtocart={handleAddToCart}></c-product-card>

// PARENT JS
handleAddToCart(event) {
    const { productId, quantity } = event.detail;
    this.cartItems.push({ productId, quantity });
    this.updateCartTotal();
}
// SIBLING: use Lightning Message Service (LMS) for unrelated components
🔑 Key Points for Interviewer
- Parent → Child: @api properties (one-way data binding)
- Child → Parent: CustomEvent via dispatchEvent
- Sibling/Unrelated: Lightning Message Service (LMS)
- bubbles + composed: required for event to cross Shadow DOM boundaries
- HTML event listener: on + eventname (e.g., onaddtocart)
🎤 One-Line Answer
"Parent→Child: @api properties. Child→Parent: CustomEvent with detail payload. Siblings/Unrelated: Lightning Message Service — each pattern is unidirectional by design for predictable data flow."
Q65
What is Lightning Message Service (LMS) and when do you use it?
✅ Direct Answer
LMS enables communication between LWC, Aura, and Visualforce components that have no parent-child relationship — across different DOM trees on the same Lightning page. Uses a Message Channel (metadata file) defining the message structure. Publisher calls publish(); subscribers call subscribe().
🌍 Real World Example
// Message Channel: force-app/main/default/messageChannels/ProductSelected.messageChannel-meta.xml

// PUBLISHER (product list component)
import { publish, MessageContext } from 'lightning/messageService';
import PRODUCT_SELECTED from '@salesforce/messageChannel/ProductSelected__c';
export default class ProductList extends LightningElement {
    @wire(MessageContext) messageContext;

    selectProduct(productId) {
        publish(this.messageContext, PRODUCT_SELECTED, { productId, timestamp: Date.now() });
    }
}

// SUBSCRIBER (product detail component — completely separate, no parent/child relationship)
import { subscribe, unsubscribe, MessageContext } from 'lightning/messageService';
export default class ProductDetail extends LightningElement {
    @wire(MessageContext) messageContext;
    subscription = null;

    connectedCallback() {
        this.subscription = subscribe(this.messageContext, PRODUCT_SELECTED, (msg) => {
            this.loadProduct(msg.productId);
        });
    }

    disconnectedCallback() {
        unsubscribe(this.subscription); // Prevent memory leaks
        this.subscription = null;
    }
}
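The Message Channel file referenced above is never shown; a minimal sketch of what it could look like (field names mirror the example's payload and are assumptions, not a copy of any real org's file):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<LightningMessageChannel xmlns="http://soap.sforce.com/2006/04/metadata">
    <masterLabel>ProductSelected</masterLabel>
    <isExposed>true</isExposed>
    <lightningMessageFields>
        <fieldName>productId</fieldName>
        <description>Id of the selected product</description>
    </lightningMessageFields>
    <lightningMessageFields>
        <fieldName>timestamp</fieldName>
        <description>Selection time in milliseconds</description>
    </lightningMessageFields>
</LightningMessageChannel>
```

The declared lightningMessageFields are documentation for consumers; publishers import the channel as ProductSelected__c, as shown in the publisher code above.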
🔑 Key Points for Interviewer
- LMS: cross-component, cross-DOM-tree, cross-component-type communication
- Message Channel: an XML metadata file in the messageChannels folder, imported in JS with a __c suffix
- Works between LWC, Aura, and Visualforce on same Lightning page
- Always unsubscribe in disconnectedCallback to prevent memory leaks
- Preferred over pubsub library — native, officially supported by Salesforce
🎤 One-Line Answer
"LMS enables cross-component messaging between unrelated LWC, Aura, and VF components on the same page via Message Channels — always unsubscribe in disconnectedCallback to prevent memory leaks."
Q66
What is Shadow DOM in LWC and how does it affect CSS styling?
✅ Direct Answer
Shadow DOM creates an encapsulated boundary around each LWC component — parent CSS styles don't cascade into child components and child styles don't leak out to parents. This prevents style conflicts between components. To pass styles across Shadow DOM boundaries, use CSS custom properties (variables).
🌍 Real World Example
/* Parent CSS */
.card { background: red; font-size: 18px; }
/* This DOES NOT affect child component's .card elements — Shadow DOM blocks it */
/* To style across shadow boundary — CSS custom properties */
/* Parent CSS */
:host { --card-bg: blue; --card-padding: 16px; }
/* Child CSS — reads parent's custom property */
.card { background: var(--card-bg, white); padding: var(--card-padding, 8px); }
/* :host selector — styles the component's root element from within */
:host { display: block; margin: 16px 0; }
:host([variant="large"]) { font-size: 18px; } /* Conditional host styling */
🔑 Key Points for Interviewer
- Shadow DOM: each component's CSS is fully scoped — no leaking in/out
- CSS custom properties (--variables): the ONLY way to style across Shadow DOM
- :host selector: styles component's root element from inside
- SLDS classes: use directly in LWC templates — work inside Shadow DOM
- Lightning Web Security (LWS): Salesforce's client-side security layer (successor to Lightning Locker) that enforces isolation on top of Shadow DOM
🎤 One-Line Answer
"Shadow DOM encapsulates CSS per component — no style leaking in or out. Use CSS custom properties (--variables) to pass styling values across Shadow DOM boundaries into child components."
Q67
How do you call Apex imperatively in LWC vs using @wire?
🌍 Real World Example
import { LightningElement, wire } from 'lwc';
import getAccounts from '@salesforce/apex/AccountController.getAccounts';
import saveAccount from '@salesforce/apex/AccountController.saveAccount';
import { ShowToastEvent } from 'lightning/platformShowToastEvent';
import { refreshApex } from '@salesforce/apex';
export default class AccountManager extends LightningElement {
    wiredAccountsResult; // Store wire result for refreshApex

    // @wire: AUTO-CALLS on component load, reactive to parameter changes
    @wire(getAccounts, { type: 'Customer' })
    wiredAccounts(result) {
        this.wiredAccountsResult = result;
        if (result.data) this.accounts = result.data;
    }

    // IMPERATIVE: user-triggered, full control
    async handleSave() {
        this.isLoading = true;
        try {
            await saveAccount({ accountData: this.formData });
            this.dispatchEvent(new ShowToastEvent({ title: 'Saved!', variant: 'success' }));
            refreshApex(this.wiredAccountsResult); // Refresh wire after save
        } catch (error) {
            this.dispatchEvent(new ShowToastEvent({
                title: 'Error', message: error.body.message, variant: 'error'
            }));
        } finally {
            this.isLoading = false;
        }
    }
}
🔑 Key Points for Interviewer
- @wire: auto-calls on load, reactive to $ params, returns {data, error}
- Imperative: Promise-based, on-demand, user-triggered
- refreshApex(): refreshes a specific @wire result after imperative changes
- async/await: cleaner imperative code than .then().catch()
- @wire for reads; imperative for writes and user-triggered reads
🎤 One-Line Answer
"@wire auto-calls Apex reactively on load and param changes. Imperative Apex is Promise-based and user-triggered — use @wire for reads, imperative for writes; refreshApex() to sync wire after imperative changes."
Q68
What is NavigationMixin in LWC and how do you use it?
🌍 Real World Example
import { LightningElement } from 'lwc';
import { NavigationMixin } from 'lightning/navigation';
export default class NavigationDemo extends NavigationMixin(LightningElement) {
    // Navigate to a specific record page
    viewRecord(recordId) {
        this[NavigationMixin.Navigate]({
            type: 'standard__recordPage',
            attributes: { recordId: recordId, actionName: 'view' }
        });
    }

    // Navigate to object list view
    viewAllAccounts() {
        this[NavigationMixin.Navigate]({
            type: 'standard__objectPage',
            attributes: { objectApiName: 'Account', actionName: 'list' }
        });
    }

    // Navigate to external URL
    openSFInterviewPro() {
        this[NavigationMixin.Navigate]({
            type: 'standard__webPage',
            attributes: { url: 'https://www.sfinterviewpro.com' }
        });
    }

    // Get URL without navigating (for href attributes)
    async getRecordUrl(recordId) {
        const url = await this[NavigationMixin.GenerateUrl]({
            type: 'standard__recordPage',
            attributes: { recordId: recordId, actionName: 'view' }
        });
        return url;
    }
}
🔑 Key Points for Interviewer
- NavigationMixin: apply to class — NavigationMixin(LightningElement)
- Navigate types: standard__recordPage, standard__objectPage, standard__webPage, standard__namedPage
- GenerateUrl: returns URL string without navigating
- Works in LEX, Experience Cloud, Salesforce Mobile App
- actionName values: view, edit, new, list, home
🎤 One-Line Answer
"NavigationMixin(LightningElement) enables programmatic navigation — this[NavigationMixin.Navigate]({type, attributes}) for records, lists, or external URLs; GenerateUrl for URL string without navigating."
Q69
What are getters in LWC and why are they preferred over inline template expressions?
🌍 Real World Example
import { LightningElement, wire } from 'lwc';
import getOpportunities from '@salesforce/apex/OppController.getOpportunities';
export default class OpportunityList extends LightningElement {
    @wire(getOpportunities) opps;

    // Getters — computed properties, auto-reactive, testable
    get hasData() {
        return this.opps?.data?.length > 0; // Optional chaining prevents null errors
    }
    get totalValue() {
        if (!this.opps?.data) return '£0';
        const total = this.opps.data.reduce((sum, opp) => sum + (opp.Amount || 0), 0);
        return '£' + total.toLocaleString();
    }
    get sortedByAmount() {
        return this.opps?.data ? [...this.opps.data].sort((a, b) => b.Amount - a.Amount) : [];
    }
    get isLoading() { return !this.opps; }
    get hasError() { return !!this.opps?.error; }
    get errorMessage() { return this.opps?.error?.body?.message; }
}
// Template: <template if:true={hasData}>
//     <p>Total: {totalValue}</p>
//     <template for:each={sortedByAmount} for:item="opp">
🔑 Key Points for Interviewer
- LWC templates only support simple property references — no inline logic
- Getters auto-recompute when underlying reactive properties change
- Optional chaining (?.) prevents null reference errors cleanly
- Keep getters pure — no side effects, no DML, just computation
- Unit testable: call getter directly in test and assert on computed value
🎤 One-Line Answer
"Getters compute values in JavaScript instead of templates — LWC templates don't support complex expressions. Getters auto-recompute when reactive properties change and are independently unit-testable."
Q70
How do you handle errors in LWC Apex calls?
🌍 Real World Example
import { LightningElement } from 'lwc';
import { ShowToastEvent } from 'lightning/platformShowToastEvent';
import saveData from '@salesforce/apex/DataService.saveData';
export default class ErrorHandlingDemo extends LightningElement {
    errorMessage;

    // Helper to extract meaningful error message
    getErrorMessage(error) {
        if (Array.isArray(error.body)) {
            return error.body.map(e => e.message).join('\n');
        }
        if (typeof error.body?.message === 'string') {
            return error.body.message;
        }
        if (error.body?.output?.errors) {
            return error.body.output.errors.map(e => e.message).join('\n');
        }
        return 'Unknown error occurred';
    }

    async handleSave() {
        try {
            await saveData({ data: this.formData });
            this.dispatchEvent(new ShowToastEvent({
                title: 'Success', message: 'Record saved!', variant: 'success'
            }));
            this.errorMessage = null;
        } catch (error) {
            this.errorMessage = this.getErrorMessage(error);
            this.dispatchEvent(new ShowToastEvent({
                title: 'Save Failed', message: this.errorMessage, variant: 'error', mode: 'sticky'
            }));
        }
    }
}
🔑 Key Points for Interviewer
- error.body.message: single error string
- error.body.output.errors: validation/field-level errors array
- ShowToastEvent: import from lightning/platformShowToastEvent
- mode: 'sticky' keeps toast visible until dismissed (for important errors)
- @wire errors: check wiredResult.error in getter/template
🎤 One-Line Answer
"Extract error.body.message or error.body.output.errors in try/catch, display via ShowToastEvent — always show user-friendly messages, never raw error objects; use 'sticky' mode for critical errors."
Q71
What is lightning-datatable and what are its key features?
🌍 Real World Example
// Component HTML
// <lightning-datatable key-field="Id" data={opportunities} columns={columns}
//     sorted-by={sortedBy} sorted-direction={sortedDirection}
//     onsort={handleSort} onrowaction={handleAction}
//     enable-infinite-loading onloadmore={loadMore}></lightning-datatable>

// JavaScript
columns = [
    { label: 'Name', fieldName: 'Name', type: 'url',
      typeAttributes: { label: { fieldName: 'Name' }, target: '_blank' }, sortable: true },
    { label: 'Amount', fieldName: 'Amount', type: 'currency', sortable: true },
    { label: 'Stage', fieldName: 'StageName', type: 'text', sortable: true },
    { label: 'Close Date', fieldName: 'CloseDate', type: 'date-local', sortable: true },
    { type: 'action', typeAttributes: { rowActions: this.getRowActions() } }
];
getRowActions() {
    return [{ label: 'View', name: 'view' }, { label: 'Edit', name: 'edit' }];
}
handleAction(event) {
    const { action, row } = event.detail;
    if (action.name === 'view') this.navigateToRecord(row.Id);
}
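The onsort handler wired in the markup above is not shown. A sketch of what it could look like, with the comparison factored into a plain function (sortBy is a hypothetical helper, not a platform API):

```javascript
// Pure helper: returns a NEW array sorted by a field in 'asc' or 'desc' order.
// Copying with [...] matters — lightning-datatable's data is read-only.
function sortBy(rows, fieldName, direction) {
    const factor = direction === 'asc' ? 1 : -1;
    return [...rows].sort((a, b) => {
        const x = a[fieldName] ?? '';
        const y = b[fieldName] ?? '';
        return x > y ? factor : x < y ? -factor : 0;
    });
}

// Inside the component, the datatable's onsort event supplies both values:
// handleSort(event) {
//     const { fieldName, sortDirection } = event.detail;
//     this.opportunities = sortBy(this.opportunities, fieldName, sortDirection);
//     this.sortedBy = fieldName;          // keeps the sort arrow in sync
//     this.sortedDirection = sortDirection;
// }
```

Reassigning this.opportunities (rather than sorting in place) triggers the re-render and keeps the original wire data untouched.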
🔑 Key Points for Interviewer
- key-field: required unique identifier — use Id
- Column types: text, number, currency, percent, date, url, email, boolean, button, action
- sortable: true on columns — wire up onsort handler
- Inline editing: editable: true on columns + onsave handler
- Custom cell rendering: extend LightningDatatable with custom data types, or build a custom HTML table
🎤 One-Line Answer
"lightning-datatable provides sorting, inline editing, row selection, and row actions out of the box — use it for standard tabular data; build custom HTML tables only when cell-level component rendering is needed."
Q72
What is the @salesforce/schema import and why should you use it?
🌍 Real World Example
import { LightningElement, wire, api } from 'lwc';
import { getRecord, getFieldValue } from 'lightning/uiRecordApi';
// Schema imports - validated at compile time
import ACCOUNT_NAME from '@salesforce/schema/Account.Name';
import ACCOUNT_INDUSTRY from '@salesforce/schema/Account.Industry';
import ACCOUNT_REVENUE from '@salesforce/schema/Account.AnnualRevenue';
import ACCOUNT_OBJECT from '@salesforce/schema/Account';
export default class AccountDetail extends LightningElement {
    @api recordId;

    @wire(getRecord, {
        recordId: '$recordId',
        fields: [ACCOUNT_NAME, ACCOUNT_INDUSTRY, ACCOUNT_REVENUE]
    })
    account;

    get name() { return getFieldValue(this.account.data, ACCOUNT_NAME); }
    get industry() { return getFieldValue(this.account.data, ACCOUNT_INDUSTRY); }
}
// If Account.Name is renamed in org → compilation fails immediately
// vs hardcoded 'Account.Name' string → fails at runtime when user loads component
🔑 Key Points for Interviewer
- @salesforce/schema: compile-time validation of object/field API names
- Typos fail build, not runtime — catches errors before deployment
- getFieldValue(): safely extracts field values from wire results
- Use with lightning/uiRecordApi for standard CRUD without Apex
- Much safer than hardcoded strings like 'Account.Name' in wire adapters
🎤 One-Line Answer
"@salesforce/schema imports validate object/field API names at compile time — typos fail the build before deployment instead of failing at runtime when users load the component."
Q73
What is lightning-record-edit-form and when do you prefer it over custom Apex?
🌍 Real World Example
<!-- lightning-record-edit-form - zero Apex, respects FLS and validation rules -->
<lightning-record-edit-form
    record-id={recordId}
    object-api-name="Opportunity"
    onsuccess={handleSuccess}
    onerror={handleError}
    onsubmit={handleSubmit}>
    <lightning-messages></lightning-messages> <!-- shows form-level errors -->
    <lightning-input-field field-name="Name"></lightning-input-field>
    <lightning-input-field field-name="Amount"></lightning-input-field>
    <lightning-input-field field-name="StageName"></lightning-input-field>
    <lightning-input-field field-name="CloseDate"></lightning-input-field>
    <lightning-button type="submit" label="Save"></lightning-button>
</lightning-record-edit-form>

<!-- onsuccess: fires with created/updated record -->
<!-- onerror: fires with validation/server errors -->
<!-- onsubmit: fires before submission — intercept to modify fields -->
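The onsubmit interception mentioned above can be sketched as follows. The withDefaults helper and the specific field values are assumptions for illustration; the preventDefault/submit pattern is the standard interception mechanism:

```javascript
// Pure helper: normalize field values before the form submits.
// (Hypothetical defaults — adapt to your own required fields.)
function withDefaults(fields) {
    return {
        ...fields,
        Name: (fields.Name || '').trim(),            // strip stray whitespace
        StageName: fields.StageName || 'Prospecting' // fall back to a default stage
    };
}

// In the component: cancel the automatic submit, adjust fields, resubmit.
// handleSubmit(event) {
//     event.preventDefault(); // stop lightning-record-edit-form's default save
//     const fields = withDefaults(event.detail.fields);
//     this.template.querySelector('lightning-record-edit-form').submit(fields);
// }
```

Because the modified fields go through the same submit() call, FLS and validation rules still apply to the adjusted values.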
🔑 Key Points for Interviewer
- Auto-respects FLS — hidden fields don't appear, read-only fields can't be edited
- Auto-runs validation rules — no extra code needed
- lightning-input-field renders correct UI control per field type (picklist, date, etc.)
- lightning-record-view-form: read-only version
- Use custom Apex when: multi-object saves, complex pre-save logic, non-standard behavior
🎤 One-Line Answer
"lightning-record-edit-form handles FLS, validation rules, and field type rendering automatically with zero Apex — use custom Apex only for multi-object saves or complex pre-save business logic."
Q74
How do you load a third-party JavaScript library in LWC?
🌍 Real World Example
import { LightningElement } from 'lwc';
import { loadScript, loadStyle } from 'lightning/platformResourceLoader';
import CHART_JS from '@salesforce/resourceUrl/ChartJS'; // Static Resource
export default class ChartComponent extends LightningElement {
    chartInitialized = false;

    renderedCallback() {
        if (this.chartInitialized) return;
        this.chartInitialized = true;
        Promise.all([
            loadScript(this, CHART_JS + '/Chart.min.js'),
            loadStyle(this, CHART_JS + '/Chart.min.css')
        ])
            .then(() => {
                this.initializeChart();
            })
            .catch(error => {
                console.error('Failed to load Chart.js:', error);
            });
    }

    initializeChart() {
        const ctx = this.template.querySelector('canvas').getContext('2d');
        new window.Chart(ctx, {
            type: 'bar',
            data: this.chartData,
            options: { responsive: true }
        });
    }
}
🔑 Key Points for Interviewer
- Upload library as Static Resource (zip if multiple files)
- loadScript/loadStyle: from lightning/platformResourceLoader
- Load in renderedCallback with isInitialized guard
- Promise.all: load multiple files simultaneously
- Some libraries break under Lightning Web Security — check Locker-compatibility
🎤 One-Line Answer
"Upload library as Static Resource → use loadScript/loadStyle from lightning/platformResourceLoader in renderedCallback with isInitialized guard — Promise.all for loading multiple files simultaneously."
Q75
What is the difference between imperative Apex and @wire, and when do you use each?
✅ Direct Answer
@wire: auto-calls on component load, reactive to parameter changes ($ prefix), cacheable=true supported, best for data that loads with the component. Imperative: user-triggered (button click), can call multiple times with same params, no caching by default, required for DML operations (non-cacheable methods).
🔑 Key Points for Interviewer
- @wire works only with @AuraEnabled(cacheable=true) Apex methods
- Imperative works with both cacheable and non-cacheable methods
- DML Apex (insert/update/delete) MUST be called imperatively — not via @wire
- refreshApex(): refresh a specific @wire result after imperative DML
- Both can be used in same component for different purposes
🎤 One-Line Answer
"@wire for reactive data that auto-loads with the component (cacheable=true). Imperative for user-triggered actions, DML operations, and cases where you need to call the same method multiple times."
🔄 Section 6 — Flows & Automation
Q76–Q85 · Intermediate · Modern Automation Platform
Q76
What are the different types of Flows in Salesforce?
✅ Direct Answer
Record-Triggered Flow: auto-fires on record create/update/delete — Before Save (fast, no DML cost) or After Save (full DML, related record operations). Screen Flow: user-facing guided process. Scheduled Flow: time-based automation. Auto-launched Flow: programmatically called from Apex, Processes, or REST. Platform Event-Triggered: reacts to platform events.
🌍 Real World Example
Record-Triggered Before Save: when Opportunity saved, auto-set Description = "Created by " + Owner.Name (no DML, fast). Record-Triggered After Save: when Stage = Closed Won, create 5 onboarding Tasks and send email. Screen Flow: 4-step wizard embedded in LWC for guided quote creation. Scheduled: every Monday 8 AM, update all stale Leads. Auto-launched: called from Apex batch per Account for complex field updates.
🔑 Key Points for Interviewer
- Before Save: modifies the triggering record only, no extra DML, runs before Apex triggers in the save order
- After Save: full DML, related records, runs after commit
- Screen Flow: embeddable in LWC, pages, communities via <lightning-flow>
- Scheduled: not guaranteed exact timing — use Scheduled Apex for mission-critical
- All flow types can call Invocable Methods for Apex integration
🎤 One-Line Answer
"Five Flow types: Record-Triggered (Before/After Save), Screen (user UI), Scheduled (time-based), Auto-launched (programmatic), Platform Event — choose based on what initiates the automation."
Q77
What is $Record vs $Record__Prior in Record-Triggered Flows?
✅ Direct Answer
$Record contains the current field values after the user's changes. $Record__Prior contains the field values before the change — only available in update-triggered flows, null for create flows. Compare both to detect specific field changes and prevent unnecessary or recursive flow execution.
🌍 Real World Example
Opportunity Stage Change Flow: Entry condition = StageName changed (StageName != $Record__Prior.StageName). Flow only fires when Stage actually changes — not on every Opportunity save. Inside flow: Decision element checks "Is New Stage = Closed Won?" → Yes → create 5 onboarding tasks. Without $Record__Prior, the flow fires and creates duplicate tasks every time the Opportunity is saved for any reason.
🔑 Key Points for Interviewer
- $Record: current values (after user's change)
- $Record__Prior: values before update (null on create flows)
- CHANGED operator: shorthand for field != $Record__Prior.field
- Essential for recursion prevention in Record-Triggered Flows
- Also prevents duplicate processing when unrelated fields change
🎤 One-Line Answer
"$Record = current values. $Record__Prior = pre-change values (update only). Compare them in entry conditions to fire Flow only when specific fields change — not on every record save."
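For intuition, the entry-condition comparison above behaves like this plain-JavaScript guard (a conceptual sketch only, not Flow syntax; shouldFire is a made-up name):

```javascript
// Conceptual model of an update-triggered Flow entry condition:
// fire only when the named field actually changed between saves.
function shouldFire(record, priorRecord, fieldName) {
    // record ~ $Record (values after the user's change)
    // priorRecord ~ $Record__Prior (values before the change; update flows only)
    return record[fieldName] !== priorRecord[fieldName];
}
```

A save that touches other fields leaves the compared field equal to its prior value, so the guard returns false and the Flow never enters — which is exactly how the CHANGED check prevents duplicate task creation in the example above.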
Q78
How do you handle errors (faults) in Flows?
✅ Direct Answer
Connect a Fault Connector from any DML or action element to an error-handling path. Best practices: store {!$Flow.FaultMessage} in a variable, create a Flow_Error_Log__c record for audit trail, send admin email notification, and show a user-friendly message in Screen Flows. Without fault connectors, Flow errors display raw technical messages to users.
🌍 Real World Example
Auto-launched Flow creating related Task records: Fault connector → Assignment (stores fault message) → Create Flow_Error_Log__c (stores: error message, record ID, user, timestamp, flow name) → Send Email to Salesforce Admin. When Apex catches the flow exception, it can also check the error log. Accenture mandates fault paths on all production flows — no unhandled errors reaching users.
🔑 Key Points for Interviewer
- {!$Flow.FaultMessage}: built-in variable with the error text
- Fault connector: red path from any element that can fail
- Flow_Error_Log__c: custom object to log flow errors with context
- Screen Flow fault path: show friendly message, not technical error
- Auto-launched fault path: always log and notify — user won't see the error
🎤 One-Line Answer
"Connect Fault Connectors to error-handling paths — log {!$Flow.FaultMessage} to an error object, notify admins, show user-friendly messages. Unconnected faults expose raw errors to users."
Q79
When do you choose Flow over Apex and vice versa?
✅ Direct Answer
Choose Flow when: simple-moderate logic, standard CRUD operations, admin/business-user maintainability required, no external API callouts needed, guided user processes. Choose Apex when: complex business logic, external HTTP callouts, batch processing millions of records, custom REST endpoints, performance-critical operations, complex error handling with retry logic.
💡 Why?
This question shows consulting maturity at Accenture. Flow = lower maintenance cost, business-user modifiable, faster to build, no deployment required for changes. Apex = more powerful, better performance, requires developer and deployment. Wrong choice costs the client money in ongoing maintenance.
🌍 Real World Example
Case escalation (send email + update priority + create task): Flow — business user can modify rules without developer. Nightly sync of 5M records from SAP: Apex Batch — Flow can't handle millions efficiently. Complex CPQ pricing with 50 rules + tax API callout: Apex — Flow would be unmaintainable at that complexity. Accenture principle: Flow first, Apex only when Flow genuinely can't handle it.
🔑 Key Points for Interviewer
- Flow: declarative, admin-maintainable, no deployment for changes
- Apex: programmatic, developer-required, more powerful
- Mixing: Flow calls Apex via @InvocableMethod when Flow needs Apex power
- Flow limits: 2,000 executed elements per flow interview; transaction governor limits still apply
- Accenture: Flow-first principle — Apex only when Flow genuinely can't handle it
🎤 One-Line Answer
"Flow for simple/moderate logic that admins maintain. Apex for complex logic, external callouts, and batch volumes. Accenture principle: Flow first — reach for Apex only when Flow genuinely can't handle the requirement."
Q80
What is an Approval Process in Salesforce and how does it differ from a Flow?
✅ Direct Answer
Approval Process: structured human decision-making workflow — record submitted for approval, locked from editing, routed to approvers who approve/reject, actions fire on approval or rejection. Flow: automated without human approval. Use Approval Process when business requires explicit human sign-off at defined stages before records proceed.
🌍 Real World Example
Discount Approval: Sales Rep submits Opportunity with >20% discount. Step 1: Sales Manager approves (up to 30%). Step 2: VP approves (up to 50%). Record locked — no edits during approval. On Approval: Flow fires to update Stage, send welcome email. On Rejection: Flow fires to reset discount and email rep with rejection reason. Approval Process handles human decision; Flow handles automation consequences.
🔑 Key Points for Interviewer
- Approval Process: human decision, multi-step, record locking, defined chain
- Flow: automated, no human approval gate, faster
- Combine: Approval Process triggers a Flow on approval/rejection
- Delegated Approvers: backup approver when primary is unavailable
- Approval.process() with ProcessSubmitRequest / ProcessWorkitemRequest: Apex API for programmatic submit, approve, and reject
🎤 One-Line Answer
"Approval Process requires explicit human sign-off with record locking. Flow automates without human gates — combine them: Approval Process triggers Flows on approval and rejection for post-decision automation."
Q81
What is Flow Orchestration in Salesforce?
✅ Direct Answer
Flow Orchestration coordinates multiple Flows as stages and steps in a larger business process — sequential steps (one completes before next starts), parallel steps (multiple run simultaneously), and interactive steps (waits for a user to complete a Screen Flow). It's a parent Flow managing child Flows as orchestrated work items.
🌍 Real World Example
New Client Onboarding Orchestration: Stage 1 (sequential) — Legal signs NDA. Stage 2 (parallel) — IT provisions access AND Sales creates account plan simultaneously. Stage 3 (interactive) — Client Success Manager completes onboarding checklist Screen Flow. Stage 4 (sequential) — Welcome email sent automatically. All coordinated without custom Apex, each stage visible in the Orchestration record UI.
🔑 Key Points for Interviewer
- Orchestration: parent that coordinates child flows as stages/steps
- Sequential: step N must complete before step N+1 starts
- Parallel: multiple steps run simultaneously — faster overall completion
- Interactive: waits for user to complete an assigned Screen Flow
- Work Guide: Orchestration creates Work Item records for user-assigned steps
🎤 One-Line Answer
"Flow Orchestration coordinates multiple Flows as sequential, parallel, or interactive steps — complex multi-stage business processes without custom Apex coordination code."
Q82
What is a Scheduled Flow and what are its limitations compared to Scheduled Apex?
✅ Direct Answer
Scheduled Flow runs on a defined schedule via Flow Builder — no code needed, easy setup. Limitations vs Scheduled Apex: not guaranteed to run at exact time (can be delayed if org over limits), not suitable for high record volumes (use Batch Apex), schedule options limited to Once, Daily, or Weekly, can pause during Salesforce maintenance windows.
🌍 Real World Example
Scheduled Flow: every Sunday midnight, find all Leads inactive >30 days, update Status = "Re-engage," create Task. Simple, no code, admin can modify. Bank's nightly statement run processing 5M records exactly at 2:00 AM: Scheduled Apex Batch (guaranteed timing, handles millions, each chunk gets fresh limits). Rule: Scheduled Flow for simple automation; Scheduled Apex for volume, exactness, and enterprise reliability.
🔑 Key Points for Interviewer
- Scheduled Flow: no code, admin-friendly, good for thousands of records
- Scheduled Apex: code required, handles millions, guaranteed (mostly) timing
- Scheduled Flow frequency options: Once, Daily, or Weekly (no sub-daily runs)
- Scheduled Flow pauses during limits or maintenance — not guaranteed
- Monitor: Setup → Scheduled Jobs for upcoming Scheduled Apex runs
🎤 One-Line Answer
"Scheduled Flow is no-code time-based automation — but pauses under limits and isn't designed for millions of records. Use Scheduled Apex Batch for mission-critical timing, high volumes, or sub-daily frequency."
Q83
How do you prevent recursion in a Record-Triggered Flow?
✅ Direct Answer
Use Entry Conditions with $Record vs $Record__Prior comparison — the Flow only fires when specific fields actually change, not on every save. Also set "Trigger When: Record is Updated AND" specific field changed condition. This prevents the Flow from re-firing when its own DML updates a different field on the same record.
🌍 Real World Example
Flow on Account update: sets Description based on Industry change. Without recursion prevention: Flow fires → updates Description → Account saves → Flow fires again → updates Description → infinite loop → "Maximum flow trigger depth exceeded" error. Fix: Entry Condition = Industry CHANGED (Industry != $Record__Prior.Industry). Flow only fires on Industry changes — Description updates don't trigger re-entry.
🔑 Key Points for Interviewer
- Primary prevention: Entry Conditions with CHANGED operator or != $Record__Prior
- Flow Builder also has "Run Once Per Record" option for some scenarios
- Recursion error message: "Maximum flow trigger depth exceeded"
- Different from Apex: Apex uses static Boolean; Flow uses entry conditions
- Also: Salesforce limits flow re-entry depth to prevent runaway recursion
🎤 One-Line Answer
"Prevent Flow recursion with $Record__Prior comparison in entry conditions — CHANGED operator ensures Flow only fires when the specific field actually changes, not when the Flow's own updates re-trigger it."
Q84
What are the key elements available in Flow Builder?
✅ Direct Answer
Core elements: Screen (user interface), Record (Create/Update/Get/Delete), Action (call Invocable Method, send email, post to Chatter), Decision (if/else branching), Loop (iterate collection), Assignment (set variable values), Subflow (call another Flow), Wait (time-based pause for scheduled paths), Transform (data mapping).
🌍 Real World Example
Complex order processing Flow: Get Record (fetch Account data) → Decision (is Account active?) → Loop (iterate over Order Line Items) → Assignment (calculate totals) → Action (call @InvocableMethod for tax calculation) → Create Records (create Invoice) → Screen (confirm to user) → Subflow (trigger fulfillment flow). Each element has a specific role — knowing them all is what Accenture tests.
🔑 Key Points for Interviewer
- Get Records: SOQL query equivalent in Flow
- Decision: if/else with multiple outcome paths
- Loop: forEach equivalent — iterate collections
- Transform: map fields between objects without Assignment elements
- Wait: pause until date/time or platform event — used in time-based logic
🎤 One-Line Answer
"Key Flow elements: Screen (UI), Get/Create/Update/Delete Records (DML), Decision (branching), Loop (iteration), Assignment (variables), Action (Apex/email), Subflow (reuse), Wait (time-based) — each has a specific purpose."
Q85
What is a subflow and when would you use it?
✅ Direct Answer
A Subflow element calls another Auto-launched Flow from within a parent Flow — passing input variables and receiving output variables. Use subflows to: reuse common flow logic across multiple parent flows, break complex flows into maintainable modules, and allow shared logic to be updated in one place without modifying each parent flow.
🌍 Real World Example
Notification_Subflow sends email + Chatter post + creates Task. 5 different parent flows all call this subflow when completing their processes — Case close flow, Opportunity win flow, Lead convert flow, Contract activate flow, Order complete flow. When notification template changes, update Notification_Subflow once. Without subflows: update the notification logic in 5 separate flows.
🔑 Key Points for Interviewer
- Subflow: calls Auto-launched Flow — not Record-Triggered or Screen directly
- Input variables: pass data into subflow from parent
- Output variables: receive data back from subflow
- DRY principle: don't repeat yourself — shared logic in subflow
- Flow Orchestration uses subflows as its steps
🎤 One-Line Answer
"Subflow calls a reusable Auto-launched Flow with input/output variables — apply the DRY principle: shared logic lives in one subflow, multiple parent flows call it, one update propagates everywhere."
🔗 Section 7 — Integration & APIs
Q86–Q100 · Advanced · Accenture's Strongest Interview Focus
Q86
What is the difference between REST API and SOAP API in Salesforce?
✅ Direct Answer
REST API: lightweight, JSON/XML, HTTP methods (GET/POST/PUT/PATCH/DELETE), stateless, simpler to use, modern standard. SOAP API: XML-based, WSDL contract, more complex, enterprise-legacy standard. Salesforce recommends REST for all new integrations — higher per-call limits, easier client libraries, faster development.
🌍 Real World Example
REST API: Mobile app fetching Account data — GET /services/data/v60.0/sobjects/Account/{id} — simple, fast, JSON response, works in any language. SOAP API: Legacy SAP integration from 2012 — uses WSDL-generated Java client because SAP's middleware only speaks SOAP. Accenture's new integrations all use REST via MuleSoft; SOAP only for maintaining legacy client systems that can't be changed.
🔑 Key Points for Interviewer
- REST: /services/data/v60.0/ — JSON, HTTP verbs, modern
- SOAP: /services/Soap/u/ — XML, WSDL, enterprise legacy
- Bulk API 2.0: REST-based for millions of records asynchronously
- Streaming API (CometD): real-time push via long-polling
- Composite API: multiple REST calls in one HTTP request
🎤 One-Line Answer
"REST: lightweight JSON over HTTP — modern, simple, recommended. SOAP: XML with WSDL contract — complex but enterprise-standard. Accenture uses REST for all new integrations; SOAP only for legacy systems that require it."
Q87
What are Named Credentials and why must you use them for callouts?
✅ Direct Answer
Named Credentials store external endpoint URLs and authentication details (OAuth tokens, API keys, certificates) securely in Salesforce metadata — encrypted, not visible in code. Callouts reference them as 'callout:CredentialName'. They auto-handle OAuth token refresh, require no Remote Site Settings, and keep credentials out of source code and version control.
🌍 Real World Example
// WITHOUT Named Credentials - SECURITY RISK
HttpRequest badReq = new HttpRequest();
badReq.setEndpoint('https://api.erp.com/orders');
badReq.setHeader('Authorization', 'Bearer hardcoded_token_here'); // In GitHub = security breach!
// WITH Named Credentials - SECURE
HttpRequest req = new HttpRequest();
req.setEndpoint('callout:ERP_Integration/orders');
req.setMethod('GET');
// No credentials in code. Salesforce resolves authentication automatically.
// Token refresh handled automatically for OAuth credentials.
🔑 Key Points for Interviewer
- Named Credentials: encrypted, deployable with metadata, not in code
- Auto-handles OAuth token refresh — no manual token management
- No Remote Site Settings needed — endpoints auto-whitelisted
- External Credentials: newer pattern — reusable auth across multiple Named Credentials
- Accenture mandates Named Credentials for ALL external callouts — no exceptions
🎤 One-Line Answer
"Named Credentials store endpoints + auth securely in Salesforce metadata — never hardcode credentials in Apex. Referenced as 'callout:Name', auto-refreshes OAuth tokens, requires no Remote Site Settings."
Q88
Why can't you make HTTP callouts directly from Apex triggers? How do you solve this?
✅ Direct Answer
Triggers execute inside an open DML transaction — Salesforce throws "You have uncommitted work pending. Please commit or rollback before calling out" if you attempt an HTTP callout while uncommitted DML exists. Solution: use @future(callout=true) or a Queueable that implements Database.AllowsCallouts — these async methods run in a separate transaction after the DML commits.
🌍 Real World Example
// WRONG - Callout in trigger
trigger OrderTrigger on Order__c (after insert) {
    for (Order__c o : Trigger.new) {
        HttpRequest req = new HttpRequest();
        req.setEndpoint('callout:ERP_System/orders');
        req.setMethod('POST');
        new Http().send(req); // ERROR: You have uncommitted work pending
    }
}
// CORRECT - Delegate to @future
trigger OrderTrigger on Order__c (after insert) {
    ERPService.syncOrders(new List<Id>(Trigger.newMap.keySet())); // Calls @future method
}
public class ERPService {
    @future(callout=true)
    public static void syncOrders(List<Id> orderIds) {
        List<Order__c> orders = [SELECT Id, Name, Amount__c FROM Order__c WHERE Id IN :orderIds];
        // Safe to call out here - the trigger's transaction has already committed
        HttpRequest req = new HttpRequest();
        req.setEndpoint('callout:ERP_System/orders');
        req.setMethod('POST');
        req.setHeader('Content-Type', 'application/json');
        req.setBody(JSON.serialize(orders));
        new Http().send(req);
    }
}
🔑 Key Points for Interviewer
- DML context = open transaction = no callouts allowed
- @future(callout=true): runs after transaction commits, separate context
- Queueable + Database.AllowsCallouts: more powerful, chainable
- Max 100 callouts per transaction; 120-second cumulative timeout across all callouts
- Platform Events + subscriber trigger: alternative decoupled pattern
🎤 One-Line Answer
"Triggers can't make callouts — DML transaction is open. Use @future(callout=true) or Queueable with AllowsCallouts to move callouts to async context after the DML transaction commits."
Q89
What is the Composite API and when would you use it?
✅ Direct Answer
Composite API combines up to 25 REST API requests into one HTTP call — sequential execution with referenceId chaining (use output of step 1 as input to step 2). Reduces round trips, saves API quota, and ensures related record creation happens atomically. allOrNone=true makes it transactional.
🌍 Real World Example
Mobile app new customer flow: Create Account → Create Contact linked to Account → Create Opportunity linked to both → Create Task linked to Opportunity. Old approach: 4 separate API calls, 4 round trips, risk of partial failure. Composite API: 1 HTTP call, referenceId chaining (@{Account.id} in Contact's AccountId), allOrNone=true ensures all 4 succeed or all roll back. API quota: 4 calls → 1 call. Latency: 800ms → 200ms.
🔑 Key Points for Interviewer
- Composite: sequential, up to 25 subrequests, referenceId chaining
- Composite Batch: parallel independent requests, up to 25
- sObject Collections: bulk DML up to 200 records in one call
- allOrNone: true = transactional (all or nothing)
- Endpoint: POST /services/data/v60.0/composite
🎤 One-Line Answer
"Composite API bundles 25 API calls in one HTTP request with referenceId chaining — creates related records in one round trip, saving API quota and latency. allOrNone=true makes it transactional."
Q90
What is the Bulk API 2.0 and when do you use it over REST API?
✅ Direct Answer
Bulk API 2.0 processes large data volumes (millions of records) asynchronously via CSV job — create job, upload CSV, poll for results. Use for 10,000+ records. REST API handles small real-time operations (up to 200 records per sObject Collections call). Data Loader can use the Bulk API under the hood when its Bulk API setting is enabled.
🌍 Real World Example
Daily product catalog update: 300,000 Product2 records from ERP. REST API approach: 300,000 ÷ 200 = 1,500 individual API calls — burns daily quota and takes 30 minutes. Bulk API 2.0: 1 job creation + CSV upload → async processing → 1 status poll = 3 API calls total. Salesforce optimizes batch DB operations internally. Standard daily Bulk API limit: 150 million records. Accenture uses Bulk API for all data migrations.
🔑 Key Points for Interviewer
- Operations: insert, update, upsert, delete, hardDelete, query
- Async: create job → upload CSV → poll → download results
- 150M records/day standard Salesforce limit
- Data Loader: uses the Bulk API when enabled in its settings (SOAP API otherwise)
- REST for real-time small ops; Bulk API for nightly/weekly large batch
🎤 One-Line Answer
"Bulk API 2.0 processes millions of records asynchronously via CSV jobs — use for 10,000+ records to save API quota. REST API is for real-time small operations; Bulk API is for enterprise data migrations."
Q91
What are the different Salesforce integration patterns and when do you use each?
✅ Direct Answer
Five patterns: Request-Reply (Salesforce calls external, waits for response — synchronous). Fire-and-Forget (Salesforce sends, doesn't wait — async). Batch Data Sync (scheduled large volume exchange — ETL). Remote Call-In (external system calls Salesforce REST/SOAP). Data Virtualisation (query external data via External Objects without importing).
🌍 Real World Example
Request-Reply: Credit score lookup — button → Salesforce calls bureau API → waits → displays score in 2 seconds. Fire-and-Forget: Order created → async callout to shipping system (don't block user). Batch Sync: nightly product catalog from ERP via Bulk API. Remote Call-In: website contact form → REST API creates Lead. Data Virtualisation: SAP customer data shown on Account via External Objects — no data import, always current.
🔑 Key Points for Interviewer
- Request-Reply: synchronous, user waits, real-time data needed
- Fire-and-Forget: async, user doesn't wait, @future or Queueable
- Batch Sync: high volume, time tolerance, Scheduled Apex + Bulk API
- Remote Call-In: REST/SOAP endpoints, Connected App + OAuth
- Salesforce Connect: External Objects for real-time external data without import
🎤 One-Line Answer
"Five patterns: Request-Reply (sync wait), Fire-and-Forget (async notify), Batch Sync (scheduled volumes), Remote Call-In (external initiates), Data Virtualisation (query external without import) — choose by latency tolerance and volume."
Q92
What is MuleSoft and how does it fit into Accenture's integration architecture?
✅ Direct Answer
MuleSoft is Salesforce's enterprise integration platform (iPaaS) that connects any system — ERP, databases, legacy systems, APIs — via a hub-and-spoke architecture. It handles data transformation between protocols, error handling, retry logic, API management, and monitoring. Accenture is one of the largest MuleSoft implementation partners globally.
🌍 Real World Example
Accenture pharma client: Salesforce (CRM) + SAP ERP + Oracle HR + Data Warehouse + 3 legacy systems. MuleSoft Anypoint Platform at the center. New Opportunity in Salesforce → MuleSoft picks up via CDC event → transforms Salesforce format to SAP IDOC format → creates SAP quote → syncs pricing back to Salesforce → updates Data Warehouse. All orchestrated by MuleSoft — no direct Salesforce-to-SAP coupling.
🔑 Key Points for Interviewer
- Anypoint Platform: MuleSoft's development and management environment
- Mule Runtime: executes integration flows (Mule Applications)
- API-led connectivity: Experience API → Process API → System API layers
- Native Salesforce Connector: out-of-the-box CRUD, bulk, streaming support
- Accenture uses MuleSoft for all large enterprise integration projects
🎤 One-Line Answer
"MuleSoft is Salesforce's enterprise integration hub — connects any system via API-led connectivity (Experience/Process/System APIs). Accenture uses it for all large integration projects involving SAP, Oracle, and legacy systems."
Q93
What is the access token vs refresh token in Salesforce OAuth?
✅ Direct Answer
Access Token: short-lived credential (1-2 hours) sent in every API call as Bearer token in Authorization header — grants actual API access. Refresh Token: long-lived credential used ONLY to obtain a new access token when the current one expires — never sent in API calls, only to the token endpoint to refresh.
🌍 Real World Example
MuleSoft integration: Initial auth → receives access token (1 hour) + refresh token. All API calls: "Authorization: Bearer {accessToken}". At 55 minutes, MuleSoft proactively calls token endpoint with refresh token → receives new access token. Loop continues indefinitely without user re-authentication. If access token is intercepted by attacker → expires in max 2 hours. Revoke refresh token in Connected App → all access immediately terminated.
🔑 Key Points for Interviewer
- Access Token: short-lived, in every API call, damage-limited if stolen
- Refresh Token: long-lived, only to token endpoint, store securely
- Refresh endpoint: POST /services/oauth2/token grant_type=refresh_token
- JWT Bearer: no refresh token — generates new access token via certificate
- Revoke: POST /services/oauth2/revoke to invalidate immediately
🎤 One-Line Answer
"Access token: short-lived (1-2 hrs), in every API call. Refresh token: long-lived, only exchanges for new access token at token endpoint. If access token is stolen, it expires in hours; revoke refresh token for immediate termination."
Q94
What is Salesforce Connect and External Objects?
✅ Direct Answer
Salesforce Connect enables real-time access to data stored in external systems via External Objects (__x suffix) — data is never imported into Salesforce, always queried live from the source. Supported adapters: OData 2.0/4.0, Cross-org (Salesforce to Salesforce), Custom (Apex-based). External Objects support SOQL, reports, and related lists.
🌍 Real World Example
SAP has 10 years of historical order data — 50M records. Importing all to Salesforce: impossible (storage cost, sync complexity). Salesforce Connect: create Order__x External Object pointing to SAP OData endpoint. Account page shows related SAP orders in real-time via related list. Sales rep sees live SAP data without leaving Salesforce. No ETL, no storage cost, always current. Accenture uses this pattern for legacy ERP data access.
🔑 Key Points for Interviewer
- External Objects: __x suffix, queried live from external system
- Data not stored in Salesforce — always real-time from source
- OData adapter: most common for ERP systems supporting OData protocol
- Cross-org adapter: connect multiple Salesforce orgs
- Limitations: no triggers, no bulk operations, subject to external system's performance
🎤 One-Line Answer
"Salesforce Connect queries external data live via External Objects (__x) — no data import, no storage cost, always real-time. Perfect for large historical datasets in ERP systems that can't or shouldn't be copied into Salesforce."
Q95
What is an outbound message in Salesforce and when do you use it?
✅ Direct Answer
Outbound Messages send SOAP XML notifications to an external endpoint when triggered by a Workflow Rule or Approval Process action. They're declarative (no Apex code), guaranteed-delivery with retry logic, and include field values at the time of the event. The external system must acknowledge receipt with a success response.
🌍 Real World Example
Legacy ERP integration from 2015: when Opportunity Stage = Closed Won, Outbound Message fires automatically to ERP endpoint with opportunity fields (amount, product, contact). ERP processes the SOAP message and returns a success acknowledgment. Salesforce retries at increasing intervals for up to 24 hours if no acknowledgment arrives. No Apex code, no maintenance — still runs reliably today. Modern equivalent: Platform Event + external subscriber.
🔑 Key Points for Interviewer
- Declarative: configured in Setup → Workflow Rule actions
- SOAP XML: external endpoint must accept SOAP format
- Guaranteed delivery: automatic retries at increasing intervals for up to 24 hours
- Field-level control: choose which fields to include in the message
- Modern alternative: Platform Events — more flexible, not limited to SOAP
🎤 One-Line Answer
"Outbound Messages send SOAP XML notifications to external endpoints with guaranteed delivery and automatic retry — declarative, no Apex code. Modern alternative is Platform Events for more flexibility."
Q96
What is the Streaming API and how does it enable real-time integrations?
✅ Direct Answer
Streaming API delivers real-time notifications to subscribed clients using CometD long-polling protocol — clients connect once and receive events as they happen without polling. Supports: PushTopic (SOQL-based record change notifications), Platform Events, Change Data Capture, and Generic Streaming (custom channels).
🌍 Real World Example
Warehouse dashboard: traditional approach polls Salesforce API every 30 seconds — 2,880 API calls/day. Streaming API: warehouse app subscribes to Order Change Data Capture via CometD — receives notification within 2 seconds of any Order status change. Zero polling. Real-time warehouse visibility. Accenture warehouse management projects always use CDC + Streaming API for live inventory dashboards.
🔑 Key Points for Interviewer
- CometD: long-polling protocol for push notifications
- PushTopic: SOQL-based — notify when records matching query change
- Platform Events: developer-published custom events
- CDC (Change Data Capture): automatic record change events
- Durable subscribers: replay missed events within the retention window — 72 hours for platform events, 3 days for CDC change events
🎤 One-Line Answer
"Streaming API pushes real-time notifications to subscribed clients via CometD — eliminates polling, supports Platform Events, CDC, and PushTopics. Receive record changes in seconds without burning API quota."
Q97
What is the difference between synchronous and asynchronous integration patterns?
✅ Direct Answer
Synchronous: caller waits for response — tight coupling, immediate feedback, user experience blocks until complete. Asynchronous: caller fires and continues — loose coupling, no waiting, resilient to external system outages. Synchronous for user-facing real-time data; asynchronous for background processing, notifications, and high-volume data exchange.
🌍 Real World Example
Synchronous: Credit check button on Account — user expects immediate result (2 seconds). Blocks UI until credit bureau responds. Asynchronous: Order created in Salesforce → Platform Event → MuleSoft picks up → sends to ERP (may take 30 seconds) → user already moved to next task. If ERP is down, event replays when it recovers. Synchronous failure = user sees error immediately. Async failure = retried silently.
🔑 Key Points for Interviewer
- Sync: immediate response, user waits, tight coupling
- Async: fire and forget, loose coupling, resilient to failures
- Accenture preference: async for integrations where possible — more resilient
- Hybrid: sync for read operations, async for write operations
- Platform Events + CDC: Salesforce's native async integration backbone
🎤 One-Line Answer
"Synchronous = caller waits, immediate feedback, tight coupling. Asynchronous = fire and continue, loose coupling, resilient. Use sync for user-facing real-time data; async for background processing and external system writes."
Q98
What is the maximum number of API calls per day in Salesforce and how do you manage this limit?
✅ Direct Answer
API entitlement scales with licenses — roughly 1,000 calls per user license per day on Enterprise Edition, with a per-org minimum (exact entitlements vary by edition and add-ons). Enterprise Edition: typically 15,000-100,000+ per day. Manage by: using Bulk API for batch operations (1 job vs thousands of calls), caching frequently-read data, using CDC/Streaming instead of polling, combining calls with Composite API, and monitoring via API Usage reports in Setup.
🌍 Real World Example
Client hitting API limit daily: root cause analysis showed an external app polling 10 Salesforce objects every minute — 10 × 60 × 24 = 14,400 API calls/day just for polling. Fix: replaced polling with CDC subscriptions via Streaming API. API calls dropped from 14,400 to ~50/day. Saved API quota for actual user operations. Accenture integration audits always check for polling patterns first.
🔑 Key Points for Interviewer
- Formula: 1,000 × licensed users (Enterprise), check actual in Setup → Company Info
- Bulk API calls: count differently — optimized for high volume
- Monitor: Setup → API Usage, Workbench, or custom API usage reports
- Solutions: Bulk API, CDC, caching, Composite API, reduce polling frequency
- Salesforce can grant temporary increases for peak periods
🎤 One-Line Answer
"API limit = 1,000 × licensed users/day. Manage by replacing polling with CDC/Streaming, using Bulk API for batch operations, combining calls with Composite API, and monitoring in Setup → Company Info."
Q99
What is an External Credential and how is it different from a Named Credential?
✅ Direct Answer
External Credentials store authentication details (OAuth client credentials, custom headers, certificates) separately from the endpoint URL. Named Credentials reference External Credentials for auth + define the endpoint URL. This separation enables one External Credential to be reused across multiple Named Credentials pointing to different endpoints of the same system.
🌍 Real World Example
SAP integration: one OAuth Client Credential (External Credential: SAP_OAuth) shared by three Named Credentials: SAP_Orders (orders endpoint), SAP_Inventory (inventory endpoint), SAP_HR (HR endpoint). Changing the OAuth client secret? Update one External Credential — all three Named Credentials automatically use the new secret. Old pattern: update credentials in each Named Credential separately — error-prone.
🔑 Key Points for Interviewer
- External Credential: auth only — reusable across multiple Named Credentials
- Named Credential: endpoint URL + reference to External Credential
- Principals: Named Principal (one shared credential) or Per User (each user has their own token)
- Introduced to solve the "same auth, different endpoints" pattern cleanly
- Legacy Named Credentials: combined auth + endpoint (still supported)
🎤 One-Line Answer
"External Credential stores auth details (OAuth, API keys) separately — multiple Named Credentials reference one External Credential. Change auth once, all Named Credentials update automatically."
Q100
How do you design a resilient integration — what happens when the external system is down?
✅ Direct Answer
Resilient integration design: 1) Async over sync (Platform Events retry automatically). 2) Dead letter queue pattern — failed events stored for manual replay. 3) Retry logic with exponential backoff in callout code. 4) Circuit breaker pattern — stop retrying if system is consistently down. 5) Error logging with alerts. 6) Idempotent operations — safe to retry without duplicate effects.
🌍 Real World Example
ERP goes down at 2 AM during nightly sync. Resilient design: Salesforce publishes Platform Events → MuleSoft subscriber fails to process → MuleSoft queues events in persistent store → alert sent to on-call engineer → ERP recovers at 4 AM → MuleSoft replays all 3 hours of queued events → all records sync without data loss. Accenture designs all integrations with this failure-tolerance pattern from day one.
🔑 Key Points for Interviewer
- Platform Events: durable, replayable (72 hours) — natural retry mechanism
- CDC: 3-day replay window for change events — same durability guarantees
- Exponential backoff: 1s, 2s, 4s, 8s, 16s before retry
- Idempotent: same operation multiple times = same result (use External ID for upsert)
- Circuit breaker: after N failures, stop retrying for T minutes to let system recover
🎤 One-Line Answer
"Resilient integration: async over sync, exponential backoff retry, dead letter queue for failed events, circuit breaker pattern, idempotent operations, and error logging with alerts — design for failure from day one."
🗄️ Section 8 — Data Modeling
Q101–Q110 · Intermediate → Advanced · Architecture Questions
Q101
What is an External ID field and why is it critical for data migrations?
✅ Direct Answer
External ID is a custom field marked as External ID — it stores a unique identifier from an external system, gets automatically indexed (making queries on it selective), and enables upsert operations that match records by external key instead of Salesforce ID. Critical for data migrations where Salesforce IDs don't exist yet in the source system.
🌍 Real World Example
SAP → Salesforce migration: SAP has Account records with SAP_Customer_Number (e.g., "CUST-12345"). Before importing contacts, accounts must exist. Upsert Accounts by SAP_Customer_Number__c (External ID). Then upsert Contacts — reference Account by SAP_Customer_Number without knowing Salesforce Account IDs. All relationships maintained without a two-step import. Accenture uses External IDs on every data migration project.
🔑 Key Points for Interviewer
- External ID: auto-indexed (selective queries), unique constraint optional
- Max 25 External ID fields per object (increased from 7 in recent releases)
- Upsert: finds existing record by External ID, updates if found, inserts if not
- Cross-object reference: relate child to parent by parent's External ID in Data Loader
- Essential for two-way sync — SAP → Salesforce → SAP without ID translation layer
🎤 One-Line Answer
"External ID stores source system's key, gets auto-indexed, and enables upsert — critical for data migrations where source system IDs must be preserved and used to maintain relationships across objects."
Q102
What is a polymorphic relationship in Salesforce?
✅ Direct Answer
A polymorphic relationship is a lookup that can point to multiple different object types — the same field can reference an Account, Contact, or Lead depending on the record. Task and Event use polymorphic WhoId (Contact or Lead) and WhatId (Account, Opportunity, Case, and many other objects). Custom polymorphic lookups cannot be created — polymorphism exists only on certain standard fields (WhoId, WhatId, OwnerId, and a few others).
🌍 Real World Example
// Task WhoId is polymorphic - can be Contact or Lead
Task t = [SELECT Id, WhoId, Who.Name, Who.Type FROM Task LIMIT 1];
// Who.Type = 'Contact' or 'Lead' depending on the record
// TYPEOF handles type-specific field access in SOQL:
List<Task> tasks = [
    SELECT Id,
        TYPEOF Who
            WHEN Contact THEN Phone, Email
            WHEN Lead THEN Company, Status
            ELSE Name
        END
    FROM Task
    WHERE WhoId != null
];
🔑 Key Points for Interviewer
- WhoId: Contact or Lead — standard polymorphic on Task/Event
- WhatId: Account, Opportunity, Case, or any object — standard on Task/Event
- TYPEOF keyword: handles polymorphic field type checking in SOQL
- Custom polymorphic lookups: not supported — only certain standard fields are polymorphic
- Reports can't easily filter polymorphic fields — need custom workarounds
🎤 One-Line Answer
"Polymorphic relationship links to multiple object types — Task WhoId can be Contact or Lead. Use TYPEOF keyword in SOQL to handle type-specific field access on polymorphic relationship fields."
Q103
What is a self-referential relationship and when do you use it?
✅ Direct Answer
A self-referential relationship is a lookup or master-detail where an object relates to itself — used for hierarchical data like employee-manager relationships, account hierarchies, or category trees. Salesforce has built-in Account Hierarchy using the ParentId field (Account lookup to Account).
🌍 Real World Example
Account Hierarchy: Global Company → Regional Division → Country Office — all Account records, ParentId points to parent Account. Employee hierarchy: Employee__c with Manager__c (Lookup to Employee__c) — same object, self-referential. Category tree: Product_Category__c with Parent_Category__c (Lookup to Product_Category__c) — allows unlimited nesting of categories. Max 5 levels of parent relationship traversal in SOQL (Parent.Parent.Parent.Parent.Parent.Name).
🔑 Key Points for Interviewer
- Account Hierarchy: built-in via ParentId → enables Account hierarchy reports
- Custom: create Lookup field on object pointing to same object
- SOQL traversal: up to 5 levels of Parent.Parent.Parent in one query
- Circular references: A → B → A causes issues — prevent with validation
- Hierarchy reports: Salesforce has special hierarchy report type for Account
🎤 One-Line Answer
"Self-referential relationship creates hierarchy within one object — Account uses ParentId for Account hierarchy. Max 5 levels of SOQL traversal. Prevent circular references with validation rules."
Q104
What are Big Objects in Salesforce and when would you use them?
✅ Direct Answer
Big Objects (__b suffix) store massive volumes of data (billions of records) — designed for archival, historical data, and audit trails that would be too expensive to store in standard objects. They can be queried with SOQL only when the filter uses the index fields in their defined order, have limited DML, no triggers, no workflows, and can't be used in standard reports.
🌍 Real World Example
Financial client: 5 years of transaction history = 2 billion records. Storing in standard Salesforce objects: cost-prohibitive (storage limits), performance degradation. Solution: archive transactions older than 1 year to Transaction_Archive__b (Big Object). Current-year transactions stay in Transaction__c (standard object, fast SOQL, reports). Historical queries filter on the Big Object's index fields — more constrained than standard objects, but they scale to billions of records.
🔑 Key Points for Interviewer
- Big Objects: __b suffix, billions of records, append-only design
- SOQL supported only with index-field filters — no arbitrary ad-hoc queries
- No triggers, no workflows, no standard reports
- Index rule: filter on a leading subset of the index fields, in order, without gaps
- Use case: audit trails, event logs, historical data archival
🎤 One-Line Answer
"Big Objects (__b) store billions of records for archival/audit use cases — no standard SOQL (use Async SOQL), no triggers or reports. Ideal for historical transaction data that's too large for standard objects."
Q105
What is a Skinny Table in Salesforce?
✅ Direct Answer
A Skinny Table is a Salesforce internal database optimization — a hidden shadow table containing only the frequently queried fields of a large object. When query performance degrades at large data volumes, Salesforce Support can create a skinny table holding just the columns a specific query pattern needs, dramatically improving read performance without any change to the data model.
🌍 Real World Example
Client with 5M Account records: reports always filter on Industry, Region__c, and CreatedDate but full Account row has 150 fields. Report runs in 25 seconds — too slow. Raised a Salesforce Support case requesting a Skinny Table on those 3 fields. Salesforce creates an internal index table with just those columns. Report speed: 25 seconds → 2 seconds. No change to data model or queries — invisible performance improvement.
🔑 Key Points for Interviewer
- Not self-service — must request via Salesforce Support case
- Internal Salesforce optimization, invisible to developers
- Best for: objects with 100K+ records, specific frequently-queried field patterns
- Max 100 fields per skinny table
- Contact Salesforce to update the skinny table if additional fields must be included — it doesn't pick up new fields automatically
🎤 One-Line Answer
"Skinny Table is a Salesforce-internal shadow table of frequently-queried fields — requested via Support case, invisible to code, dramatically improves query performance on large objects with specific filter patterns."
Q106
What is the difference between a lookup filter and a validation rule?
✅ Direct Answer
Lookup Filter: restricts which records appear in a lookup search/dialog — applied on the relationship field itself, limiting what users can select. It can be optional (users may clear the filter and save any value) or required (an invalid selection blocks the save). Validation Rule: validates field values before save — throws an error if the formula evaluates to true. Both prevent bad data, but at different points.
🌍 Real World Example
Opportunity Product lookup filter: only shows Products where Product_Family__c = Opportunity.Product_Family__c — users can only select products matching the opportunity's family. Without lookup filter, users could accidentally select wrong products. Validation Rule: IF(Amount > 100000 AND ISBLANK(Approver__c), true, false) — blocks save if high-value deal has no approver. Lookup filter = UI-level selection control. Validation = data-integrity save control.
🔑 Key Points for Interviewer
- Lookup Filter: controls what appears in lookup search, on relationship field
- Validation Rule: formula-based, fires on save, can use complex logic
- Lookup filter optional vs required: optional = warning, required = error
- Lookup filters can reference fields on both parent and child objects
- Neither is skipped for admins by default — build explicit bypass logic (e.g., a custom permission check) if System Administrators need an exemption
🎤 One-Line Answer
"Lookup Filter restricts what appears in the lookup search dialog (UI control). Validation Rule fires on save with formula-based error (data control). Use both for layered data quality: filter what they can choose, validate what they actually enter."
Q107
What is a cross-object formula field and what are its limitations?
✅ Direct Answer
A cross-object formula field references fields from a related parent object (via lookup or master-detail) using dot notation — e.g., Account.Industry on Contact. Limitations: read-only (can't be edited), not indexed by default, can traverse up to 10 relationship levels, can't reference child records (parents only), and doesn't support some field types (long text area, geolocation).
🌍 Real World Example
Opportunity formula field: Account_Industry__c = Account.Industry. Shows Account's industry on Opportunity records without any code — auto-updates when Account's Industry changes. Used in reports to filter Opportunities by Account Industry without a SOQL join. Limitation hit: needed to show sum of child records on parent — cross-object formula can't do that (goes down the hierarchy). Solution: roll-up summary field instead.
🔑 Key Points for Interviewer
- Parent traversal only: Account.Industry, Contact.Account.Name (up to 10 levels)
- Not editable: formula fields are read-only
- Not indexed: filterable in SOQL WHERE, but the filter isn't selective; deterministic formulas can get a custom index via Support
- Auto-recalculates: when parent field changes, formula updates automatically
- Can't go down hierarchy: use roll-up summary for aggregating children
🎤 One-Line Answer
"Cross-object formula uses dot notation to reference parent fields (Account.Industry on Opportunity) — read-only, up to 10 levels, auto-recalculates. Can only traverse UP the hierarchy; use roll-up summary to aggregate DOWN."
Q108
What are the best practices for designing a Salesforce data model for large enterprises?
✅ Direct Answer
Key practices: design for data skew prevention (<10,000 children per parent), use External IDs for all integration keys, choose relationship types carefully (MD vs Lookup), plan for large data volumes from the start (indexes, skinny tables), minimize wide objects (many fields), use Custom Metadata for configuration, and document with entity relationship diagrams.
🌍 Real World Example
Accenture FSI data model review: Client rejected initial model that had one Master Account with 500K transactions (skew risk). Recommended: Account hierarchy (Global → Regional → Branch → Client) with transactions distributed across branch accounts. Added External IDs for all integration keys. Indexed high-frequency query fields. Documented with ERD. Post-redesign: zero lock errors, 80% faster queries, clean integration layer.
🔑 Key Points for Interviewer
- Data skew: keep < 10,000 children per parent
- External IDs: on every object that integrates with external systems
- Relationship type: MD for tight coupling/roll-up; Lookup for optional relationships
- Wide objects: more than 300 fields can impact performance
- ERD documentation: mandatory on Accenture projects for knowledge transfer
🎤 One-Line Answer
"Enterprise data model best practices: prevent data skew (<10K children/parent), External IDs for all integration keys, right relationship type, indexes on queried fields, ERD documentation, and avoid wide objects with 300+ fields."
Q109
What is the difference between a picklist and a multi-select picklist?
✅ Direct Answer
Picklist: single value selection — stored as a string. Multi-select picklist: multiple values selectable — stored as semicolon-separated string (e.g., "Option1;Option2;Option3"). Multi-select picklists have significant limitations: can't be used in roll-up summary filters, limited formula support, harder to query in SOQL, can't be used in most API operations efficiently.
🌍 Real World Example
Poor design: Regions__c as multi-select picklist (North;South;East;West). Trying to SOQL for all records including "North": WHERE Regions__c INCLUDES ('North') — works but can't be indexed. Reporting on value combinations is painful. Better design: separate boolean fields (Is_North__c, Is_South__c) or a junction object (Account ↔ Region). Multi-select picklist: only use when truly needed and reporting on individual values isn't required.
🔑 Key Points for Interviewer
- Picklist: single value, string storage, full SOQL/report support
- Multi-select: semicolon-separated, limited SOQL (INCLUDES operator)
- INCLUDES/EXCLUDES: SOQL operators for multi-select queries
- Avoid multi-select for data you'll need to report or aggregate on
- Alternative: junction object or individual boolean fields per option
🎤 One-Line Answer
"Picklist = single value, full SOQL/report support. Multi-select = semicolon-separated, limited reporting, use INCLUDES in SOQL — avoid when you need to aggregate or report on individual values."
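The INCLUDES behavior above can be shown in a short SOQL sketch, reusing the assumed Regions__c field from the example:

```apex
// Matches records where ANY of the listed values is selected
List<Account> anyMatch = [
    SELECT Id, Regions__c
    FROM Account
    WHERE Regions__c INCLUDES ('North', 'South')
];

// A semicolon-joined value matches records where BOTH values are selected
List<Account> bothMatch = [
    SELECT Id
    FROM Account
    WHERE Regions__c INCLUDES ('North;South')
];
```

Neither filter can use an index — another reason to prefer boolean fields or a junction object when you query these values often.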
Q110
What is a Record Type and when do you use it?
✅ Direct Answer
Record Types allow different page layouts, picklist values, and business processes for different user profiles on the same object — without creating separate objects. A Lead can have a "Marketing Lead" Record Type (showing marketing fields, marketing picklist values) and a "Sales Lead" Record Type (showing different layout and sales-specific picklist values).
🌍 Real World Example
Case object with 3 Record Types: "Technical Support" (shows technical fields, priority SLA picklist), "Billing Inquiry" (shows billing fields, different picklist values), "Product Feedback" (shows feedback fields, sentiment picklist). Same Case object, same workflow, same reports — but each team sees only relevant fields and appropriate picklist values. Profiles control which Record Types each team can create.
🔑 Key Points for Interviewer
- Record Type: controls page layout, picklist values, business process per profile
- Profile assignment: which Record Types a profile can create/see
- Default Record Type: the type pre-selected for a profile, used when users aren't prompted to choose
- Record Types in SOQL: WHERE RecordType.Name = 'Technical Support'
- Avoid too many Record Types: creates maintenance complexity
🎤 One-Line Answer
"Record Types enable different page layouts and picklist values per user profile on one object — use for genuinely different business processes on the same object; avoid when simple field visibility rules would suffice."
🎯 Section 9 — Accenture Scenario Questions
Q111–Q120 · Advanced · Real Interview Scenarios from Candidates
Q111
SCENARIO: A client has 3 different business units all using Salesforce. They want shared customer data but each BU has different fields, processes, and security requirements. What architecture do you recommend?
✅ Direct Answer
Evaluate two options: Single Org with Business Unit separation (using Record Types, Permission Sets, OWD + Sharing) vs Multi-Org strategy. Single Org: shared Account/Contact data, BU-specific objects, Profile/Permission Set per BU, sharing rules for cross-BU visibility. Multi-Org: complete isolation but requires complex cross-org data sharing. Recommend Single Org unless regulatory/compliance requirements mandate isolation.
🌍 Real World Example
Accenture recommendation: Single Org with Record Types (BU1_Opportunity, BU2_Opportunity, BU3_Opportunity). Shared Account/Contact objects (OWD = Private, sharing rules per BU). BU-specific custom objects hidden via Profile. Centralized reports at leadership level — one dashboard across all BUs. Cross-BU deals: manual sharing or criteria-based sharing rule. 90% of requirements met declaratively; <10% needs Apex for complex cross-BU logic.
🔑 Key Points for Interviewer
- Single Org pros: shared data, unified reports, lower TCO, one integration point
- Multi-Org pros: complete isolation, independent releases, separate governance
- Decision factors: data sovereignty requirements, release independence needs
- Cross-org data sharing: Salesforce-to-Salesforce (S2S) integration if Multi-Org
- Accenture preference: Single Org unless compliance mandates otherwise
🎤 One-Line Answer
"Single Org with Record Types, Permission Sets, and sharing rules handles 90% of multi-BU requirements at lower cost and complexity. Only recommend Multi-Org when regulatory requirements mandate complete data isolation."
Q112
SCENARIO: Users are reporting that a Lightning page is very slow to load. How do you diagnose and fix it?
✅ Direct Answer
Diagnose systematically: 1) Browser DevTools Network tab — which LWC component takes longest? 2) Check for @wire calls that trigger multiple Apex queries on load. 3) Check for non-cacheable Apex (cacheable=false loads slower). 4) Check page component count — too many components = too many parallel requests. 5) Salesforce Lightning Inspector Chrome extension for component analysis.
🌍 Real World Example
Account detail page taking 8 seconds to load. Analysis: 5 LWC components each making separate Apex calls on connectedCallback — 5 serial database queries. Fix: 1) Combined 5 queries into one optimized Apex method with @AuraEnabled(cacheable=true). 2) Used @wire instead of imperative — cacheable queries use server-side cache. 3) Added SOQL indexes on frequently-queried fields. Result: load time dropped from 8 seconds to 1.5 seconds.
🔑 Key Points for Interviewer
- Lightning Inspector: Chrome extension for LWC performance profiling
- cacheable=true: Apex results cached client-side, reduces server round trips
- Lazy loading: don't load all component data on initial render
- Limit components per page: each component = separate rendering cost
- SOQL optimization: selective queries, indexed fields, avoid N+1 patterns
🎤 One-Line Answer
"Diagnose with Lightning Inspector and Browser DevTools — look for multiple serial Apex calls on load. Fix by consolidating queries, using cacheable=true, adding SOQL indexes, and implementing lazy loading for non-critical data."
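A hedged sketch of the query-consolidation fix described above — the class, method, and child-relationship selections are illustrative:

```apex
public with sharing class AccountPageService {
    // One cacheable call replacing several per-component queries;
    // cacheable=true lets the client cache results, cutting round trips.
    @AuraEnabled(cacheable=true)
    public static Account getAccountSummary(Id accountId) {
        return [
            SELECT Id, Name, Industry,
                   (SELECT Id, Name FROM Contacts LIMIT 5),
                   (SELECT Id, Name, Amount FROM Opportunities WHERE IsClosed = false)
            FROM Account
            WHERE Id = :accountId
        ];
    }
}
```

Each LWC on the page can then @wire this one method; repeat calls with the same parameters are served from the client-side cache instead of hitting the server again.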
Q113
SCENARIO: A Batch Apex job that processes 1 million records is failing intermittently with CPU limit errors. How do you fix it?
✅ Direct Answer
Reduce batch size (try 50-100 instead of 200), optimize SOQL queries in execute() (ensure they're selective), move complex calculations to a separate Queueable called from finish(), use Map-based lookups instead of nested loops in execute(), and profile CPU usage with Limits.getCpuTime() to identify the hotspot.
🌍 Real World Example
// Root cause: nested loop in execute() = O(n²) CPU time
// BAD
for (Account a : scope) {
    for (Contact c : allContacts) { // scans every contact for every account
        if (c.AccountId == a.Id) { /* process */ }
    }
}
// FIX: group children by parent Id once, then O(1) lookups
Map<Id, List<Contact>> contactMap = new Map<Id, List<Contact>>();
for (Contact c : [SELECT Id, AccountId FROM Contact WHERE AccountId IN :scope]) {
    if (!contactMap.containsKey(c.AccountId)) {
        contactMap.put(c.AccountId, new List<Contact>());
    }
    contactMap.get(c.AccountId).add(c);
}
for (Account a : scope) {
    List<Contact> contacts = contactMap.get(a.Id); // O(1) lookup
}
// Also reduce batch size so each chunk stays within limits:
// Database.executeBatch(new MyBatch(), 50);
🔑 Key Points for Interviewer
- Smaller batch size: each execute() runs as its own transaction with a fresh CPU limit (60,000 ms for async Apex)
- Limits.getCpuTime(): monitor remaining CPU in execute()
- Map over nested loops: O(1) vs O(n) lookup performance
- Database.Stateful: carry state (e.g., running totals or lookup Maps) across execute() chunks
- Profile: Developer Console query plan + execute timing per batch
🎤 One-Line Answer
"Reduce batch size (50-100), replace nested loops with Map-based O(1) lookups, use Limits.getCpuTime() to find the hotspot, and move complex post-processing to a Queueable from finish()."
Q114
SCENARIO: You need to display real-time stock prices from an external API on an Account page. How do you design this?
✅ Direct Answer
LWC component on Account page with a "Refresh" button triggering an imperative Apex callout (non-cacheable, so always fresh). The Apex method makes an HTTP callout via Named Credential to the stock price API. Display results in the LWC. For auto-refresh: use setInterval in JavaScript to re-call the Apex method every 30 seconds. Add loading spinner and error handling.
🌍 Real World Example
// LWC JS - auto-refreshing stock price
import { LightningElement, api } from 'lwc';
import getStockPrice from '@salesforce/apex/StockService.getStockPrice';

export default class StockPriceWidget extends LightningElement {
    @api stockSymbol;
    price;
    error;
    isLoading = false;
    refreshInterval;

    connectedCallback() {
        this.fetchPrice();
        this.refreshInterval = setInterval(() => this.fetchPrice(), 30000); // every 30s
    }

    disconnectedCallback() {
        clearInterval(this.refreshInterval); // clean up!
    }

    async fetchPrice() {
        this.isLoading = true;
        try {
            this.price = await getStockPrice({ symbol: this.stockSymbol });
            this.error = null;
        } catch (e) {
            // e.body is absent for network-level failures
            this.error = e.body ? e.body.message : e.message;
        } finally {
            this.isLoading = false;
        }
    }
}
🔑 Key Points for Interviewer
- Non-cacheable Apex: fresh data every call
- Named Credential: secure stock API authentication
- setInterval: auto-refresh every N seconds
- clearInterval in disconnectedCallback: prevent memory leaks
- Loading spinner: essential UX for real-time data components
🎤 One-Line Answer
"LWC with setInterval calling non-cacheable Apex imperative method every 30 seconds, Apex calls stock API via Named Credential — clear the interval in disconnectedCallback to prevent memory leaks."
Q115
SCENARIO: Your trigger is causing "UNABLE_TO_LOCK_ROW" errors during business hours. What is the root cause and fix?
✅ Direct Answer
UNABLE_TO_LOCK_ROW means multiple concurrent transactions are trying to modify the same record simultaneously — one holds the lock, others timeout after 10 seconds waiting. Root causes: data skew (many child records under one parent causing parent lock), trigger updates same records multiple users edit, or batch job + user edits competing on same records.
🌍 Real World Example
Investigation: 30 sales reps all updating Opportunities under one "Global" Account simultaneously. Trigger on Opportunity (after update) updating the parent Account — all 30 triggering Account updates compete for the Account lock. Fix: 1) Created Account hierarchy — distributed Opportunities across 5 Regional Accounts (one per region). 2) Used batch updates for Account aggregation instead of real-time trigger. 3) Added try-catch with retry logic in the trigger for remaining edge cases.
🔑 Key Points for Interviewer
- Root causes: data skew, competing triggers, batch + manual edits
- Fix data skew: restructure data model, account hierarchy
- FOR UPDATE in SOQL: explicit locking (use carefully)
- Defer sharing: Setup option to reduce sharing recalculation lock conflicts
- Async trigger actions: move lock-prone updates to Queueable
🎤 One-Line Answer
"UNABLE_TO_LOCK_ROW = concurrent transactions competing for same record lock — fix data skew (account hierarchy), move trigger updates to async Queueable, and add retry logic for remaining concurrent access edge cases."
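The retry logic mentioned above might look like this minimal sketch (the class name and retry count are illustrative; in practice, retrying from a Queueable in a fresh transaction is more reliable than an in-transaction loop, since the competing lock may still be held):

```apex
public class RetryUtil {
    public static void updateWithRetry(List<SObject> records) {
        Integer attempts = 0;
        while (attempts < 3) {
            try {
                update records;
                return;
            } catch (DmlException e) {
                attempts++;
                Boolean lockError = e.getMessage().contains('UNABLE_TO_LOCK_ROW');
                if (!lockError || attempts == 3) {
                    throw e; // different failure, or retries exhausted
                }
            }
        }
    }
}
```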
Q116
SCENARIO: A client wants to migrate from on-premise CRM to Salesforce with 5 million Account records, 20 million Contact records, and 50 million historical transactions. What is your migration strategy?
✅ Direct Answer
Phased migration over 6+ months: Phase 1 — Data model design + External ID mapping. Phase 2 — Reference data (Picklists, Users, Custom Objects). Phase 3 — Accounts (parent records first). Phase 4 — Contacts linked to Accounts. Phase 5 — Transactions (use Bulk API, Big Objects for historical data older than 2 years). Phase 6 — Parallel running + validation. Phase 7 — Cutover.
🌍 Real World Example
Accenture FSI migration: Tools: Data Loader with Bulk API 2.0 for current data, custom scripts for 50M transactions. Process: nightly delta loads during parallel running (new records from old CRM → Salesforce). Validation: record counts + hash checks on 10% sample. Historical transactions > 3 years → Big Objects (not in standard reports, but accessible for compliance). Cutover: weekend, 4 hours, pre-validated go/no-go checklist. Zero data loss.
🔑 Key Points for Interviewer
- Parent before child: Accounts → Contacts → Transactions (referential integrity)
- External IDs: map all records to source system IDs for upsert idempotency
- Bulk API 2.0: 150M records/day limit, CSV-based
- Historical data: Big Objects for billions of records beyond standard storage
- Parallel running: both systems live, validate outputs match
🎤 One-Line Answer
"Phased migration: parent records first (Accounts), then children (Contacts), then transactions (Bulk API). Historical data > 2 years → Big Objects. External IDs on everything for idempotent upsert. Parallel running to validate before cutover."
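The External-ID upsert pattern behind "idempotent upsert" can be sketched as follows — Legacy_Id__c is an assumed External ID text field mapped to the source CRM key:

```apex
List<Account> batch = new List<Account>{
    new Account(Name = 'Acme Corp', Legacy_Id__c = 'CRM-000123')
};

// allOrNone = false collects row-level failures instead of aborting the load;
// re-running the same file updates existing rows rather than duplicating them.
Database.UpsertResult[] results =
    Database.upsert(batch, Account.Fields.Legacy_Id__c, false);

for (Database.UpsertResult r : results) {
    if (!r.isSuccess()) {
        System.debug(LoggingLevel.ERROR, r.getErrors()[0].getMessage());
    }
}
```

The same keying works in Data Loader and Bulk API 2.0 upsert jobs, which is what makes nightly delta loads safe to re-run.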
Q117
SCENARIO: You have a trigger on Account that sends an email notification when the annual revenue crosses $1M. The trigger works in sandbox but not in production. What do you check?
✅ Direct Answer
Check in order: 1) Email deliverability settings (Setup → Deliverability) — production may have "System Email Only" while sandbox has "All Email." 2) Workflow email alerts vs Apex email — different limits. 3) Apex single-email daily limits (5,000 external recipients/day). 4) User's email address validity. 5) Email blocked by the user's email server. 6) Trigger enabled/active in production (may be deactivated).
🌍 Real World Example
Exact scenario happened on Accenture project: emails worked in sandbox (Deliverability = All Email), failed in production (Deliverability = System Email Only). Root cause: production was set to System Email Only — only system-initiated emails (password resets, reports) sent. Fix: changed Production Deliverability to All Email. Also discovered: SingleEmailMessage limits were being hit (5K/day) on busy days — switched to mass email for bulk notifications.
🔑 Key Points for Interviewer
- Deliverability: System Email Only (the default in new sandboxes) blocks Apex-sent emails; All Email allows them
- Email limits: 5,000 external recipients/day for Apex single emails
- Test context: Apex tests never actually deliver emails, regardless of deliverability settings
- Debug logs: check for email errors in production debug log
- Email relay: corporate email relay can block Salesforce IPs
🎤 One-Line Answer
"Check email Deliverability settings first — 'System Email Only' (the default in new sandboxes, sometimes left on in production) blocks Apex emails. Then check the Apex single-email daily limit (5,000 external recipients/day) and corporate email relays blocking Salesforce IPs."
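A hedged Apex sketch of sending the notification with explicit failure handling, so production errors surface in debug logs instead of failing silently (the address and wording are illustrative):

```apex
String ownerEmail = 'owner@example.com'; // illustrative address
Messaging.SingleEmailMessage mail = new Messaging.SingleEmailMessage();
mail.setToAddresses(new List<String>{ ownerEmail });
mail.setSubject('Account crossed $1M annual revenue');
mail.setPlainTextBody('Please review this account for the enterprise tier.');

// allOrNothing = false returns per-message results instead of throwing
Messaging.SendEmailResult[] results =
    Messaging.sendEmail(new List<Messaging.SingleEmailMessage>{ mail }, false);
if (!results[0].isSuccess()) {
    System.debug(LoggingLevel.ERROR, results[0].getErrors()[0].getMessage());
}
```

Calling Messaging.reserveSingleEmailCapacity(1) before sending makes the transaction fail fast when the daily limit is already exhausted.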
Q118
SCENARIO: A Flow is creating duplicate records. How do you investigate and prevent this?
✅ Direct Answer
Investigation: check the Flow's trigger option — "every time a record is updated" vs "only when a record is updated to meet the condition requirements," check for recursive re-entry by comparing $Record to $Record__Prior in the entry conditions, verify the Flow isn't invoked from multiple places (another Flow, a trigger, and Process Builder all firing the same logic), and check whether a Loop element is creating records inside the loop body.
🌍 Real World Example
Duplicate Task records created on every Opportunity update. Debug: Flow had no entry condition — fired on every Opportunity save including saves from other automations. Plus the Flow itself was updating Opportunity Description → triggering the Flow again. Fix: 1) Added entry condition: Close_Date__c CHANGED. 2) Set "Only when record is updated to meet condition requirements." Duplicates stopped. Additionally added a Duplicate Rule on Task object as a safety net.
🔑 Key Points for Interviewer
- Entry conditions: the first line of defense against unwanted Flow execution
- $Record vs $Record__Prior: ensure specific field change, not every save
- Duplicate Rules: Setup → Duplicate Management as safety net
- Flow debug: use "Debug" button in Flow Builder to trace execution path
- Check all callers: trigger + process builder + another flow may all call the same flow
🎤 One-Line Answer
"Duplicate records from Flow: check entry conditions (add $Record__Prior comparison), verify recursion prevention, find all callers (triggers/other flows calling same flow), and add Duplicate Rules as a safety net."
Q119
SCENARIO: Accenture is presenting to a CIO about why they should choose Salesforce over Microsoft Dynamics. What are your key points?
✅ Direct Answer
Key Salesforce differentiators: Industry-specific clouds (Financial Services, Health, Manufacturing) with pre-built data models. AppExchange: 6,000+ pre-built solutions vs smaller Dynamics marketplace. Salesforce Shield for regulated industry compliance. AI-first with Einstein/Agentforce native. Trailhead ecosystem for faster user adoption. MuleSoft for enterprise integration. 3 automatic releases/year vs Dynamics update cycles.
🌍 Real World Example
Healthcare client choosing between Salesforce Health Cloud and Dynamics 365 Healthcare: Salesforce had pre-built patient timeline, care plan management, and provider network management vs. Dynamics needing extensive customization. Salesforce's HIPAA compliance (Shield Encryption, BYOK) met regulatory requirements out-of-box. AppExchange had 50+ healthcare-specific ISV solutions. Decision: Salesforce. Total implementation time: 6 months vs 12+ months estimated for Dynamics custom build.
🔑 Key Points for Interviewer
- Industry Clouds: Salesforce has deepest industry-specific data models
- AppExchange: 6,000+ apps vs Dynamics marketplace
- Trailhead: free learning platform — faster team upskilling
- Accenture: global Salesforce partner with 50,000+ trained consultants
- TCO: compare full implementation + ongoing maintenance, not just license
🎤 One-Line Answer
"Salesforce wins on Industry Clouds depth, AppExchange breadth (6K+ apps), native AI (Einstein/Agentforce), compliance-ready Shield, and Trailhead adoption ecosystem — Dynamics requires more custom build for equivalent functionality."
Q120
SCENARIO: You've been asked to lead a team of 5 Salesforce developers on an Accenture project. A junior developer's code has poor quality — SOQL in loops, no error handling, no test coverage. How do you handle this?
✅ Direct Answer
Constructive approach: 1) Code review process — establish PR reviews before any code merges. 2) Pair programming — senior developer codes with junior for 2-3 weeks. 3) Create coding standards document — team-agreed patterns (trigger framework, bulkification rules, test coverage minimums). 4) Automated quality gates in CI/CD — PMD static analysis, 85% coverage required. 5) Recognition when standards are met — positive reinforcement.
🌍 Real World Example
On an Accenture project: junior developer submitted trigger with SOQL in loop — caught in code review before merge. Instead of just rejecting, did a pair session: "Let me show you why this hits governor limits and how Maps solve it." Shared the team's coding standards document. Added PMD static analysis to GitHub Actions — SOQL in loops now fails the build automatically. Junior developer became the team's strongest advocate for code quality within 2 months.
🔑 Key Points for Interviewer
- Process over criticism: code review system prevents bad code reaching main
- Education over blame: pair programming teaches, doesn't demoralize
- Automation: PMD, SFDX Scanner in CI/CD catches issues automatically
- Standards documentation: removes ambiguity about expectations
- Accenture values: collaborative, respectful, growth-oriented team culture
🎤 One-Line Answer
"Address with process: code review gates before merge, pair programming to teach patterns, team coding standards document, and automated PMD quality gates in CI/CD — build the system so quality is enforced, not just expected."
🔗 Continue Your Salesforce Interview Prep
🚀 Bookmark sfinterviewpro.com
750+ free Salesforce interview questions across 20+ topics. No paywall. No signup. Updated regularly with new company-specific posts.