It's been a month since I added this certification to my list. This is one of the designer exams I wanted to take in the first place; it's moderately complex, but not especially difficult. So if you have at least 4 years of Salesforce experience, including data management practice and experience with high-volume data handling, a little brushing up can get you through this certification.
This blog post is all about the topics to cover before going for the exam.
The Salesforce Certified Data Architecture and Management Designer candidate has experience, skills, and knowledge in the following areas:
- Data modeling/database design
  - Custom fields, master-detail, and lookup relationships
  - Client requirements and mapping them to database requirements
- Standard object structure for Sales Cloud and Service Cloud
  - Making the best use of Salesforce standard objects
- Association between standard objects and Salesforce license types
- Large data volume (LDV) considerations
  - Indexing, LDV migrations, performance
- Salesforce Platform declarative and programmatic concepts
- Scripting with data tools (Data Loader, ETL platforms)
- Data Stewardship
- Data Quality Skills (concerned with clean data)
Overview:
- Content: 60 multiple-choice/multiple-select questions (2-5 unscored questions may be added)
- Time allotted to complete the exam: 90 minutes (time allows for unscored questions)
- Passing Score: 67%
- Registration fee: USD 400, plus applicable taxes as required per local law
- Retake fee: USD 200
PK Chunking Header
Use the PK Chunking request header to enable automatic primary key (PK) chunking for a bulk query job. PK chunking splits bulk queries on very large tables into chunks based on the record IDs, or primary keys, of the queried records. Each chunk is processed as a separate batch that counts toward your daily batch limit, and you must download each batch’s results separately. PK chunking is supported for the following objects: Account, Campaign, CampaignMember, Case, Contact, Lead, LoginHistory, Opportunity, Task, User, and custom objects. PK chunking works only with queries that don’t include SELECT clauses or conditions other than WHERE.
PK chunking is designed for extracting data from entire tables, but you can also use it for filtered queries. Because records could be filtered from each query’s results, the number of returned results for each chunk can be less than the chunk size. Also, the IDs of soft-deleted records are counted when the query is split into chunks, but the records are omitted from the results. Therefore, if soft-deleted records fall within a given chunk’s ID boundaries, the number of returned results is less than the chunk size.
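To make that concrete, here is a minimal Python sketch of enabling PK chunking when creating a Bulk API (1.0) query job. The instance URL, session ID, and API version are placeholder assumptions; the `Sforce-Enable-PKChunking` header and the `/services/async/.../job` endpoint are from the Bulk API documentation.

```python
import requests

# Placeholders: obtain these from a prior SOAP or OAuth login call.
INSTANCE_URL = "https://yourInstance.salesforce.com"
SESSION_ID = "<session id from login>"

# Bulk API 1.0 query job definition (XML).
job_xml = """<?xml version="1.0" encoding="UTF-8"?>
<jobInfo xmlns="http://www.force.com/2009/06/asyncapi/dataload">
  <operation>query</operation>
  <object>Account</object>
  <contentType>CSV</contentType>
</jobInfo>"""

resp = requests.post(
    f"{INSTANCE_URL}/services/async/47.0/job",
    headers={
        "X-SFDC-Session": SESSION_ID,
        "Content-Type": "application/xml; charset=UTF-8",
        # Split the query into batches of 100,000 records by primary key.
        "Sforce-Enable-PKChunking": "chunkSize=100000",
    },
    data=job_xml,
)
print(resp.text)  # jobInfo XML containing the new job's Id
```

Each resulting chunk then becomes a batch whose results you poll for and download separately, and each counts toward the daily batch limit.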
Multitenancy and Metadata
Multitenancy is a means of providing a single application to multiple organizations, such as different companies or departments within a company, from a single hardware-software stack. Instead of providing a complete set of hardware and software resources to each organization, Salesforce inserts a layer of software between the single instance and each organization's deployment. This layer is invisible to the organizations, which see only their own data and schemas while Salesforce reorganizes the data behind the scenes to perform efficient operations.
Multitenancy requires that applications behave reliably, even when architects are making Salesforce-supported customizations, which include creating custom data objects, changing the interface, and defining business rules. To ensure that tenant-specific customizations do not breach the security of other tenants or affect their performance, Salesforce uses a runtime engine that generates application components from those customizations. By maintaining boundaries between the architecture of the underlying application and that of each tenant, Salesforce protects the integrity of each tenant's data and operations.
Skinny Tables
Salesforce creates skinny tables to contain frequently used fields and to avoid joins, and it keeps the skinny tables in sync with their source tables when the source tables are modified. To enable skinny tables, contact Salesforce Customer Support.
For each object table, Salesforce maintains other, separate tables at the database level for standard and custom fields. This separation ordinarily necessitates a join when a query contains both kinds of fields. A skinny table contains both kinds of fields and does not include soft-deleted records.
(The Large Data Volumes guide illustrates this with an Account view, its corresponding database tables, and a skinny table that speeds up Account queries.)
- Skinny tables can contain a maximum of 100 columns.
- Skinny tables cannot contain fields from other objects.
- Skinny tables are copied to Full sandbox organizations as of the Summer '15 release (previously they were not). They are still not copied to other sandbox types; to have production skinny tables activated for those sandboxes, contact Salesforce Customer Support.
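Here is a toy illustration of why this helps (invented table names, not Salesforce internals): standard and custom fields sit in separate physical tables, so selecting both normally costs a join, while a skinny table holds the frequently used columns from both side by side.

```python
import sqlite3

# Toy illustration: standard and custom fields live in separate tables, so a
# query touching both needs a join. A "skinny" table keeps frequently used
# columns from both together, so the same query reads one narrow table.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE account_standard (id TEXT PRIMARY KEY, name TEXT, owner_id TEXT);
CREATE TABLE account_custom   (id TEXT PRIMARY KEY, region__c TEXT);
CREATE TABLE account_skinny   (id TEXT PRIMARY KEY, name TEXT, region__c TEXT);
""")

# Without a skinny table: a join across the standard- and custom-field tables.
with_join = """SELECT s.name, c.region__c
               FROM account_standard s JOIN account_custom c ON s.id = c.id
               WHERE c.region__c = 'EMEA'"""

# With a skinny table: one indexable table, no join, no soft-deleted rows.
no_join = "SELECT name, region__c FROM account_skinny WHERE region__c = 'EMEA'"
```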
Index Tables
The Salesforce multitenant architecture makes the underlying data table for custom fields unsuitable for indexing. To overcome this limitation, the platform creates an index table that contains a copy of the data, along with information about the data types. The platform builds a standard database index on this index table; the index table places upper limits on the number of records that an indexed search can effectively return.
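A rough sketch of the idea (invented table names, not Salesforce internals): custom field values are stored generically as text, which a normal database index can't use well, so a typed copy is maintained in a separate table that carries the real index.

```python
import sqlite3

# Rough sketch: custom field values sit in a generic text column, so the
# platform maintains a typed copy in a separate table and builds a standard
# database index on that copy, which is what indexed searches run against.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE custom_data  (org_id TEXT, row_id TEXT, field_num INTEGER,
                           value TEXT);
CREATE TABLE custom_index (org_id TEXT, row_id TEXT, field_num INTEGER,
                           string_value TEXT, number_value REAL,
                           date_value TEXT);
CREATE INDEX ix_str ON custom_index (org_id, field_num, string_value);
""")
```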
Force.com Bulk API
The REST-based Bulk API was developed specifically to simplify the process of uploading large amounts of data. It is optimized for inserting, updating, upserting, and deleting large numbers of records asynchronously by submitting them in batches to Force.com, to be processed in the background.
Uploaded records are streamed to Force.com to create a new job. As the data rolls in for the job, it is stored in temporary storage and then sliced into user-defined batches (a maximum of 10,000 records each). Even while your data is still being sent to the server, the Force.com platform submits the batches for processing.
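As a sketch of that flow (with the same placeholder instance URL, session ID, and job ID as in the PK chunking example above): create an insert job, then stream records to it as CSV batches of up to 10,000 rows each.

```python
import requests

INSTANCE_URL = "https://yourInstance.salesforce.com"  # placeholder
SESSION_ID = "<session id from login>"                # placeholder

# Create an asynchronous insert job for Account records (Bulk API 1.0).
job_xml = """<?xml version="1.0" encoding="UTF-8"?>
<jobInfo xmlns="http://www.force.com/2009/06/asyncapi/dataload">
  <operation>insert</operation>
  <object>Account</object>
  <contentType>CSV</contentType>
</jobInfo>"""
job = requests.post(
    f"{INSTANCE_URL}/services/async/47.0/job",
    headers={"X-SFDC-Session": SESSION_ID,
             "Content-Type": "application/xml; charset=UTF-8"},
    data=job_xml,
)

# Add a batch of records (up to 10,000 rows); batches are queued and processed
# in the background while later batches are still being uploaded.
csv_batch = "Name\nAcme\nGlobex\n"
requests.post(
    f"{INSTANCE_URL}/services/async/47.0/job/<jobId>/batch",  # <jobId> from the job response
    headers={"X-SFDC-Session": SESSION_ID,
             "Content-Type": "text/csv; charset=UTF-8"},
    data=csv_batch,
)
```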
Using the Bulk API and Monitoring Jobs
The Salesforce.com Data Loader (v17 and later) supports the Bulk API, so it's easy to get started uploading large datasets. Your user profile must have the "API Enabled" permission selected, so if you are a System Administrator, you are all set. To get started, open the Data Loader and enable the Bulk API in its settings.
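If you want to monitor a job outside the Data Loader UI, the Bulk API exposes job and batch status over the same endpoints; here is a minimal polling sketch (same placeholders as in the earlier examples):

```python
import requests

# Placeholders, as before.
INSTANCE_URL = "https://yourInstance.salesforce.com"
SESSION_ID = "<session id from login>"
JOB_ID = "<job id returned when the job was created>"

headers = {"X-SFDC-Session": SESSION_ID}

# Job-level status: state, records processed, records failed, and so on.
job = requests.get(f"{INSTANCE_URL}/services/async/47.0/job/{JOB_ID}",
                   headers=headers)

# Batch-level status for every batch in the job (each must be checked and
# its results downloaded separately).
batches = requests.get(f"{INSTANCE_URL}/services/async/47.0/job/{JOB_ID}/batch",
                       headers=headers)
print(job.text)
print(batches.text)
```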
Reference Links:
- https://trailhead.salesforce.com/
- Salesforce Large Data Volumes
- Exam Guide
- 6 steps toward top data quality
All the best for the exam!!