Data environments in data-driven organizations are evolving to meet the growing demand for analytics, including business intelligence (BI) dashboarding, ad hoc querying, data science, machine learning (ML), and generative AI. These organizations have an enormous demand for lakehouse solutions that combine the best of data warehouses and data lakes to simplify data management with easy access to all data from their preferred engines.

Amazon SageMaker Lakehouse unifies all your data across Amazon Simple Storage Service (Amazon S3) data lakes and Amazon Redshift data warehouses, helping you build powerful analytics and artificial intelligence and machine learning (AI/ML) applications on a single copy of data. SageMaker Lakehouse gives you the flexibility to access and query your data in place with all Apache Iceberg compatible tools and engines. It secures your data in the lakehouse by defining fine-grained permissions, which are consistently applied across all analytics and ML tools and engines. You can bring data from operational databases and applications into your lakehouse in near real time through zero-ETL integrations. You can also access and query data in place with federated query capabilities across third-party data sources through Amazon Athena.

With SageMaker Lakehouse, you can access tables stored in Amazon Redshift managed storage (RMS) through Iceberg APIs, using the Iceberg REST catalog backed by the AWS Glue Data Catalog. This expands your data integration workloads across data lakes and data warehouses, enabling seamless access to diverse data sources.

Amazon SageMaker Unified Studio, Amazon EMR 7.5.0 and higher, and AWS Glue 5.0 natively support SageMaker Lakehouse. This post describes how to integrate data in RMS tables through Apache Spark using SageMaker Unified Studio, Amazon EMR 7.5.0 and higher, and AWS Glue 5.0.
How to access RMS tables through Apache Spark on AWS Glue and Amazon EMR
With SageMaker Lakehouse, RMS tables are accessible through the Apache Iceberg REST catalog. Open source engines such as Apache Spark are compatible with Apache Iceberg and can interact with RMS tables by configuring this Iceberg REST catalog. You can learn more in Connecting to the Data Catalog using AWS Glue Iceberg REST extension endpoint.

Note that the Iceberg REST extensions endpoint is used when you access RMS tables. This endpoint is accessible through the Apache Iceberg AWS Glue Data Catalog extensions, which come preinstalled on AWS Glue 5.0 and Amazon EMR 7.5.0 or higher. The extension library enables access to RMS tables using the Amazon Redshift connector for Apache Spark.

To access RMS-backed catalog databases from Spark, each RMS database requires its own Spark session catalog configuration. Here are the required Spark configurations:
| Spark config key | Value |
| --- | --- |
| `spark.sql.catalog.{catalog_name}` | `org.apache.iceberg.spark.SparkCatalog` |
| `spark.sql.catalog.{catalog_name}.type` | `glue` |
| `spark.sql.catalog.{catalog_name}.glue.id` | `{account_id}:{rms_catalog_name}/{database_name}` |
| `spark.sql.catalog.{catalog_name}.client.region` | `{aws_region}` |
| `spark.sql.extensions` | `org.apache.iceberg.spark.extensions.IcebergSparkSessionExtensions` |
Configuration parameters:

- `{catalog_name}`: Your chosen name for referencing the RMS catalog database in your application code
- `{rms_catalog_name}`: The RMS catalog name as shown in the AWS Lake Formation catalogs section
- `{database_name}`: The RMS database name
- `{aws_region}`: The AWS Region where the RMS catalog is located
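For illustration, these settings could be passed to a Spark job through `spark-submit` as follows. This is a sketch: the account ID, catalog name (`rmscatalog1`), database name (`dev`), Region, and script name are placeholder values, not values from this walkthrough.

```bash
spark-submit \
  --conf spark.sql.catalog.rmscatalog=org.apache.iceberg.spark.SparkCatalog \
  --conf spark.sql.catalog.rmscatalog.type=glue \
  --conf spark.sql.catalog.rmscatalog.glue.id=123456789012:rmscatalog1/dev \
  --conf spark.sql.catalog.rmscatalog.client.region=us-east-1 \
  --conf spark.sql.extensions=org.apache.iceberg.spark.extensions.IcebergSparkSessionExtensions \
  your_script.py
```

The same key/value pairs can equally be set on `SparkSession.builder.config(...)` in application code.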
For a deeper understanding of how the Amazon Redshift hierarchy (databases, schemas, and tables) is mapped to the AWS Glue multilevel catalogs, refer to the Bringing Amazon Redshift data into the AWS Glue Data Catalog documentation.

In the following sections, we demonstrate how to access RMS tables through Apache Spark using SageMaker Unified Studio JupyterLab notebooks with the AWS Glue 5.0 runtime and Amazon EMR Serverless.

Although you can bring existing Amazon Redshift tables into the AWS Glue Data Catalog by creating a Lakehouse Redshift catalog from an existing Redshift namespace and then provide access to a SageMaker Unified Studio project, in the following example you'll create a managed Amazon Redshift Lakehouse catalog directly from SageMaker Unified Studio and work with that.
Prerequisites

To follow these instructions, you must have the following prerequisites:
Create a SageMaker Unified Studio project

Complete the following steps to create a SageMaker Unified Studio project:
- Sign in to SageMaker Unified Studio.
- Choose Select a project on the top menu and choose Create project.
- For Project name, enter `demo`.
- For Project profile, choose All capabilities.
- Choose Continue.
- Leave the default values and choose Continue.
- Review the configurations and choose Create project.

You need to wait for the project to be created. Project creation can take about 5 minutes. When the project status changes to Active, select the project name to access the project's home page.

- Make note of the project role ARN, because you'll need it in subsequent steps.

You've successfully created the project and noted the project role ARN. The next step is to configure a Lakehouse catalog for your RMS.
Configure a Lakehouse catalog for your RMS

Complete the following steps to configure a Lakehouse catalog for your RMS:
- In the navigation pane, choose Data.
- Choose the `+` (plus) sign.
- Select Create Lakehouse catalog to create a new catalog and choose Next.
- For Lakehouse catalog name, enter `rms-catalog-demo`.
- Choose Add catalog.
- Wait for the catalog to be created.
- In SageMaker Unified Studio, choose Data in the left navigation pane, then select the three vertical dots next to Redshift (Lakehouse) and choose Refresh to confirm the Amazon Redshift compute is active.
Create a new table in the RMS Lakehouse catalog:
- In SageMaker Unified Studio, on the top menu, under Build, choose Query Editor.
- At the top right, choose Select data source.
- For CONNECTIONS, choose Redshift (Lakehouse).
- For DATABASES, choose `dev@rms-catalog-demo`.
- For SCHEMAS, choose public.
- Choose Choose.
- In the query cell, enter and run the following query to create a new schema:
- In a new cell, enter and run the following query to create a new table:
- In a new cell, enter and run the following query to populate the table with sample data:
- In a new cell, enter and run the following query to verify the table contents:
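The query text for these four cells is not reproduced above. Assuming the `salesdb` schema and `store_sales` table referenced later in this walkthrough, the cells could look like the following sketch; the column names and sample rows are illustrative assumptions, not the post's actual schema:

```sql
-- Cell 1: create a new schema
CREATE SCHEMA salesdb;

-- Cell 2: create a new table (column definitions are illustrative)
CREATE TABLE salesdb.store_sales (
    item_id   INT,
    sale_date DATE,
    quantity  INT,
    net_paid  DECIMAL(10, 2)
);

-- Cell 3: populate the table with sample data
INSERT INTO salesdb.store_sales VALUES
    (1, '2025-01-01', 2, 19.98),
    (2, '2025-01-02', 1, 5.49);

-- Cell 4: verify the table contents
SELECT * FROM salesdb.store_sales;
```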
(Optional) Create an Amazon EMR Serverless application

IMPORTANT: This section is only required if you also plan to test using Amazon EMR Serverless. If you intend to use AWS Glue only, you can skip this section entirely.
- Navigate to the project page. In the left navigation pane, select Compute, then select the Data processing tab and choose Add compute.
- Choose Create new compute resources, then choose Next.
- Select EMR Serverless.
- Specify `emr_serverless_application` as Compute name, select Compatibility as Permission mode, and choose Add compute.
- Monitor the deployment progress. Wait for the Amazon EMR Serverless application to complete its deployment. This process can take a minute.
Access Amazon Redshift managed storage tables through Apache Spark

In this section, we demonstrate how to query tables stored in RMS using a SageMaker Unified Studio notebook.
- In the navigation pane, choose Data.
- Under Lakehouse, select the down arrow next to `rms-catalog-demo`.
- Under dev, select the down arrow next to `salesdb`, select `store_sales`, and choose the three dots.

SageMaker Lakehouse provides multiple analysis options: Query with Athena, Query with Redshift, and Open in Jupyter Lab notebook.
- Choose Open in Jupyter Lab notebook.
- On the Launcher tab, choose Python 3 (ipykernel).

In SageMaker Unified Studio JupyterLab, you can specify different compute types for each notebook cell. Although this example demonstrates using AWS Glue compute (`project.spark.compatibility`), the same code can be executed using Amazon EMR Serverless by selecting the appropriate compute in the cell settings. The following table shows the connection type and compute values to specify when running PySpark code or Spark SQL code with different engines:
| Compute option | Code type | Connection type | Compute |
| --- | --- | --- | --- |
| AWS Glue | PySpark | PySpark | `project.spark.compatibility` |
| AWS Glue | Spark SQL | SQL | `project.spark.compatibility` |
| Amazon EMR Serverless | PySpark | PySpark | `emr-s.emr_serverless_application` |
| Amazon EMR Serverless | Spark SQL | SQL | `emr-s.emr_serverless_application` |
- In the notebook cell's top left corner, set Connection Type to PySpark and select `project.spark.compatibility` (AWS Glue 5.0) as Compute.
- Run the following code to initialize the SparkSession and configure `rmscatalog` as the session catalog for accessing the `dev` database under the `rms-catalog-demo` RMS catalog:
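The initialization code itself is omitted above. The following is a minimal sketch assuming the configuration keys from the table earlier in this post; the AWS account ID and Region are placeholders. The settings are collected in a dict and applied through a small helper so they are easy to reuse:

```python
# Sketch of the notebook initialization cell. The account ID and Region below
# are placeholders; substitute your own values.
ACCOUNT_ID = "123456789012"
REGION = "us-east-1"
CATALOG = "rmscatalog"

# The glue.id value {account_id}:{rms_catalog_name}/{database_name} points
# "rmscatalog" at the dev database of the rms-catalog-demo RMS catalog.
RMS_CONF = {
    f"spark.sql.catalog.{CATALOG}": "org.apache.iceberg.spark.SparkCatalog",
    f"spark.sql.catalog.{CATALOG}.type": "glue",
    f"spark.sql.catalog.{CATALOG}.glue.id": f"{ACCOUNT_ID}:rms-catalog-demo/dev",
    f"spark.sql.catalog.{CATALOG}.client.region": REGION,
    "spark.sql.extensions":
        "org.apache.iceberg.spark.extensions.IcebergSparkSessionExtensions",
}

def configure(builder):
    """Apply the RMS catalog settings to a SparkSession builder."""
    for key, value in RMS_CONF.items():
        builder = builder.config(key, value)
    return builder

# In the notebook cell (PySpark compute), the session would then be created as:
#   from pyspark.sql import SparkSession
#   spark = configure(SparkSession.builder.appName("rms-demo")).getOrCreate()
```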
- Create a new cell and switch the connection type from PySpark to SQL to run Spark SQL commands directly.
- Enter the following SQL statement to view all tables under `salesdb` (the RMS schema) inside `rmscatalog`:
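The statement isn't shown above; assuming the catalog and schema names used in this walkthrough, it would be along these lines:

```sql
SHOW TABLES IN rmscatalog.salesdb;
```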
- In a new SQL cell, enter the following `DESCRIBE EXTENDED` statement to view detailed information about the `store_sales` table in the `salesdb` schema:
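Again assuming the names used in this walkthrough, the statement would resemble:

```sql
DESCRIBE EXTENDED rmscatalog.salesdb.store_sales;
```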
In the output, you'll observe that Provider is set to iceberg. This indicates that the table is recognized as an Iceberg table, despite being stored in Amazon Redshift managed storage.
- In a new SQL cell, enter the following `SELECT` statement to view the contents of the table:
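A plausible form of the omitted statement, using this walkthrough's names:

```sql
SELECT * FROM rmscatalog.salesdb.store_sales;
```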
Throughout this example, we demonstrated how to create a table in Amazon Redshift Serverless and seamlessly query it as an Iceberg table using Apache Spark within a SageMaker Unified Studio notebook.
Clean up

To avoid incurring future charges, clean up all created resources:

- Delete the SageMaker Unified Studio project. This step automatically deletes the Amazon EMR compute (for example, the Amazon EMR Serverless application) that was provisioned from the project:
  - Within SageMaker Unified Studio, navigate to the demo project's Project overview section.
  - Choose Actions, then select Delete project.
  - Type confirm and choose Delete project.
- Delete the created Lakehouse catalog:
  - Navigate to the AWS Lake Formation page, Catalogs section.
  - Select the `rms-catalog-demo` catalog, choose Actions, then select Delete.
  - In the confirmation window, type `rms-catalog-demo`, and then choose Drop.
Conclusion

In this post, we demonstrated how to use Apache Spark to interact with Amazon Redshift managed storage tables through Amazon SageMaker Lakehouse using the Iceberg REST catalog. This integration provides a unified view of your data across Amazon S3 data lakes and Amazon Redshift data warehouses, so you can build powerful analytics and AI/ML applications while maintaining a single copy of your data.

For additional workloads and implementations, visit Simplify data access for your enterprise using Amazon SageMaker Lakehouse.
About the Authors

Noritaka Sekiyama is a Principal Big Data Architect with Amazon Web Services (AWS) Analytics services. He is responsible for building software artifacts to help customers. In his spare time, he enjoys cycling on his road bike.

Stefano Sandonà is a Senior Big Data Specialist Solution Architect at Amazon Web Services (AWS). Passionate about data, distributed systems, and security, he helps customers worldwide architect high-performance, efficient, and secure data solutions.

Derek Liu is a Senior Solutions Architect based out of Vancouver, BC. He enjoys helping customers solve big data challenges through Amazon Web Services (AWS) analytic services.

Raj Ramasubbu is a Senior Analytics Specialist Solutions Architect focused on big data and analytics and AI/ML with Amazon Web Services (AWS). He helps customers architect and build highly scalable, performant, and secure cloud-based solutions on AWS. Raj provided technical expertise and leadership in building data engineering, big data analytics, business intelligence, and data science solutions for over 18 years prior to joining AWS. He helped customers in various industry verticals like healthcare, medical devices, life science, retail, asset management, car insurance, residential REIT, agriculture, title insurance, supply chain, document management, and real estate.

Angel Conde Manjon is a Sr. EMEA Data & AI PSA, based in Madrid. He has previously worked on research related to data analytics and AI in diverse European research projects. In his current role, Angel helps partners develop businesses centered on data and AI.
Appendix: Sample script for a Lake Formation FGAC enabled Spark cluster

If you want to access RMS tables from a Lake Formation fine-grained access control (FGAC) enabled Spark cluster on AWS Glue or Amazon EMR, refer to the following code example: