NAME: LALITYA

Role: Sr. Data Engineer


Phone: +1 (774) 525-4367
Email: lalityarao222@gmail.com
LinkedIn: www.linkedin.com/in/lalityarao
PROFESSIONAL SUMMARY:

 10+ years of professional experience in Data Engineering, Data Science, AWS cloud engineering, and ETL development, with expertise in areas such as Database Development, ETL Development, Data Modeling, Report Development, and Big Data technologies.
 Expertise in designing data-intensive applications using the Hadoop Ecosystem, Big Data Analytics, Cloud Data
Engineering, Data Warehouse/ Data Mart, Data Visualization, and Reporting.
 Experience in Python development and scientific programming, using libraries such as NumPy and Pandas for data manipulation and machine learning libraries such as Scikit-Learn and Statsmodels for data mining. Familiarity with deep learning and NLP libraries such as TensorFlow and NLTK.
 Expertise in Agile methodologies, including Scrum stories and sprints in a Python-based environment, as well as data
analytics, data wrangling, and Excel data extraction.
 Excellent knowledge of Python software development, using libraries such as NumPy, SciPy, Matplotlib, Python-Twitter, Pandas DataFrames, NetworkX, urllib2, and MySQLdb for database connectivity, as well as IDEs such as Spyder, PyCharm, and Jupyter.
 Demonstrated ability to manage data integration projects using Informatica Intelligent Cloud Services (IICS) collaboration and version control technologies and to work collaboratively with cross-functional teams.
 Competent in utilizing Terraform as Infrastructure as Code (IaC) to provision and manage resources in one or more cloud environments, including AWS, Azure, and Google Cloud.
 Experience with installation, backup, recovery, configuration, and development on multiple Hadoop distribution platforms (Cloudera and Hortonworks), including the Amazon AWS and Google Cloud platforms.
 Experience in AWS cloud services (EMR, EC2, VPC, RDS, EBS, S3, Kinesis, Lambda, Glue, Athena, Elasticsearch, SQS, IAM, CloudFront, CloudWatch, Auto Scaling, DynamoDB, Redshift, ECS).
 Experience in AWS cloud infrastructure database migrations, including converting existing Oracle and MS SQL Server databases to PostgreSQL, MySQL, and Aurora.
 Experience building S3 buckets and managing policies for S3 buckets, and using S3 and Glacier for storage and backup on AWS.
 Proven ability to develop customized Power BI solutions tailored to specific business needs, enabling effective data representation and insightful decision-making.
 Solid experience and understanding of implementing large-scale data warehousing programs and E2E data integration solutions on Snowflake Cloud and AWS Redshift.
 Experience in implementing data movement from file systems to Azure Blob storage using Python API and conducting
proofs-of-concept for running machine learning models on Azure ML studio.
 Proficient in sourcing and extracting data from SAP systems, utilizing SAP's data extraction tools and techniques for
seamless data transfer and integration
 Experience writing MapReduce programs using Apache Hadoop to analyze Big Data.
 Expertise in Tuning & Optimizing DB-relevant issues (SQL Tuning).
 Experience working with AWS services such as S3, Redshift and Google Cloud Platform (GCP)
 Experience working with text, sequence, parquet, and Avro file formats for data storage and retrieval.
 Expertise in using Sqoop and Spark to load data from MySQL and Oracle databases to HDFS and HBase for big data
processing.
TECHNICAL SKILLS:

Big Data/Hadoop Technologies MapReduce, Spark, SparkSQL, Azure, Spark Streaming, AWS, Kafka, PySpark, Airflow, Pig, Hive, Oozie, Zookeeper

Scripting Languages HTML5, CSS3, C, C++, XML, SAS, JAVA, Scala, Python, Shell Scripting, R

Google Cloud Platform GCP Cloud Storage, Big Query, Composer, Cloud Dataproc, Cloud SQL, Cloud Functions, Cloud
Pub/Sub

NoSQL Databases Cassandra, HBase, MongoDB

Development Tools Microsoft SQL Studio, Azure, Databricks, Eclipse, NetBeans, Azure Data Lake

Public Cloud EC2, S3, Autoscaling, CloudWatch, RedShift

Reporting Tools MS Office (Word/Excel/PowerPoint/ Visio/Outlook), Power BI, Tableau

Databases Microsoft SQL Server, MySQL, Oracle 11g/10g/9i, Teradata, Netezza, Sybase 12.5, Oracle HCM, and Flat Files

ETL Tools Informatica PowerCenter

Operating Systems All versions of Windows, UNIX, LINUX, macOS

PROFESSIONAL EXPERIENCE:
Client: TIAA Sep 2022 - Present
Role: Sr. Data Engineer

Responsibilities:
 Used Advanced Quantitative Cloud Analytics (AQCA), the strategic Python-based risk and analytics platform at UBS.
 Built entirely on Microsoft Azure using Azure Data Factory (ADF) and running production workloads for several asset classes, AQCA is an established pillar of the technology strategy, with ambitious plans for today and the years ahead.
 Integrated data with Microsoft Azure cloud services such as Azure Blob Storage, Azure Data Lake Storage, Azure SQL Database, Azure Synapse Analytics, and other Azure services using Informatica Intelligent Cloud Services (IICS), enabling efficient and dependable data integration in the cloud.
 Applied strong Informatica Intelligent Cloud Services (IICS) data transformation skills, including data mapping, data profiling, data enrichment, data validation, and data masking, to ensure data accuracy and consistency in data integration processes with Azure data services.
 Successfully implemented data migration strategies from SAP to Snowflake, ensuring data integrity and consistency
throughout the process
 Implemented data mart optimization strategies, including index tuning, query optimization, and data partitioning, to enhance data retrieval speed and overall system performance for complex data analysis and reporting tasks.
 Implemented and executed comprehensive API integrations to facilitate enhanced data flow and accessibility within
the systems.
 Performed Informatica Intelligent Cloud Services (IICS) and Informatica PowerCenter administration, ETL strategy, and Informatica ETL mapping; set up the Secure Agent and connected different applications and their data connectors to process different kinds of data, including unstructured (logs, clickstreams, shares, likes, topics, etc.), semi-structured (XML, JSON), and structured (RDBMS) data.
 Developed ETL (Extract, Transform, Load) pipelines to populate the data mart with relevant and valuable data from diverse operational and transactional databases, ensuring data consistency and accuracy throughout the data integration process.
 Designed and implemented ETL procedures with Snowflake, including data integration, transformation, and loading
using Snowflake's tools like Snowpipe and SnowSQL.
 Configured and managed Snowflake virtual warehouses, adjusting warehouse sizes and computing resources
dynamically to accommodate fluctuating data processing workloads and analytical query demands, ensuring cost-
effective and scalable data warehouse operations
 Demonstrated proficiency in designing and provisioning cloud resources, including virtual machines, networks, storage, and load balancers, using Terraform to define and manage Infrastructure as Code (IaC) for Azure resources alongside Azure Resource Manager (ARM) templates.
 Worked with version control systems like Git, established collaborative workflows such as Infrastructure as Code (IaC) review and continuous integration/continuous deployment (CI/CD) pipelines, and used Terraform in a team context.
 Extracted, transformed, and loaded (ETL) data from source systems to Azure data storage services using a combination of Azure Data Factory, T-SQL, and Spark SQL; ingested data into one or more Azure services (Azure Data Lake, Azure Storage, Azure SQL, Azure DW) and processed it using Databricks.
 Understanding of SAS MACROS, SAS Advance, SAS Grid, SAS Enterprise Guide, and BASE SAS.
 Outstanding business communication abilities, understanding of data warehousing theories and techniques, and the capacity to create intricate SAS programs with procedures and macros.
 Utilized Palantir Foundry to design and develop scalable data integration pipelines, enabling seamless data flow and
transformation across multiple systems and platforms.
 Developed and deployed outcomes using Spark and Scala code in a Hadoop cluster running on GCP.
 Created and maintained Extract, Transform, Load (ETL) processes to integrate data from many sources into OLAP databases, guaranteeing data accuracy, completeness, and consistency.
 Understanding of Azure Synapse pipelines, which let you coordinate and automate data-processing operations across
many Azure services.
 Defined and optimized data mart schema designs and structures, facilitating fast and efficient data querying and
reporting for various business units and stakeholders across the organization
 Involved in setting up the Apache Airflow service in GCP.
 Developed interactive and visually appealing dashboards in Power BI to provide stakeholders with comprehensive insights into complex datasets, facilitating data-driven decision-making processes.
 Utilized Azure PowerShell to implement Azure automation solutions
 Used Python libraries for data analysis, such as NumPy, Pandas, Requests, urllib3, Pyarrow.
 Built data pipelines in Airflow on GCP for ETL-related jobs using different Airflow operators.
 Created pipelines in ADF using Linked Services, Datasets, and Pipelines to extract, transform, and load data from different sources such as Azure SQL, Blob Storage, and Azure SQL Data Warehouse, and to write data back.
 Imported and exported data into HDFS and Hive using Sqoop and Kafka with batch and streaming processing, and queried it with Impala.
 Developed PySpark applications and deployed them in the Databricks cluster.
 Developed data integration programs, data pipelines, ETL procedures, or data quality checks using Talend.
 Competence with Talend software, including Talend Cloud, Talend Open Studio, Talend Data Mapper, and Talend
Studio.
 Demonstrated expertise in Palantir Foundry's data governance functionalities, ensuring compliance with data
security and privacy regulations while maintaining data accessibility and usability for authorized users.
 Enhanced data quality, cut processing time, and boosted productivity while working with Talend.
 Utilized Relational Database Management Systems (RDBMS), such as Oracle, MySQL, or SQL Server, to design and
create OLTP databases.
 Deep understanding of moving data into GCP using the Sqoop process, custom hooks for MySQL, and Cloud Data Fusion for moving data from Teradata to GCS.
 Tuned and optimized the performance of OLTP databases, increasing transaction throughput.
 Successfully implemented a Docker-based solution for scaling and optimizing data processing workflows on a Windows Server cluster, leading to a 20% decrease in processing time and increased dependability.
 Used Docker volumes and containers to design and maintain a Windows Server-based data storage infrastructure, ensuring effective and secure data access for numerous teams and applications.
 Imported data in various formats, such as JSON, ORC, Sequence, Text, CSV, Avro, and Parquet, into the HDFS cluster with compression for optimization.
 Expertise in designing and creating various analytical reports and Automated Dashboards to help users to identify
critical KPIs using Power BI and facilitate strategic planning in the organization.
 Used CI/CD tool Jenkins for code deployment and scheduling of jobs.
 Implemented Snowflake's data sharing and data replication features to facilitate secure data collaboration and
exchange between different business units and external partners, enhancing data-driven decision-making and
strategic collaboration across the organization.
 Designed and optimized Snowflake data models and schemas, ensuring efficient data organization and retrieval for
complex analytical queries and business intelligence reporting tasks.
Environment: Snowflake, Hive, PySpark, Git, Jenkins, Kafka, Sqoop, Jira, SQL, Scala, JUnit, MySQL, Power BI, Databricks, Python, Azure, Azure Data Lake, Adobe Analytics, Docker.

Client: State Street Mar 2019 - Sep 2022


Role: Data Engineer

Responsibilities:
 Involved in gathering business requirements, logical modeling, physical database design, data sourcing and data
transformation, data loading, SQL, SSIS, and performance tuning.
 Experienced in handling different optimization join operations like Map join, Sorted Bucketed Map join, etc.
 Used Amazon Elastic Compute Cloud (EC2) infrastructure for computational tasks and Simple Storage Service (S3) as a storage mechanism.
 Experience in optimizing data pipelines for SAP data integration, resulting in improved efficiency and streamlined
data flow within the Snowflake ecosystem.
 Leveraged Palantir Foundry's advanced data modeling and visualization capabilities to generate actionable insights
and facilitate data-driven decision-making processes within the organization.
 Leveraged GovCloud services for efficient data processing and analytics, incorporating technologies like AWS Lambda,
Amazon S3, and Amazon Redshift for data warehousing solutions
 Implemented Informatica Test Data Management (TDM) solutions within Cisco's data environment, enabling the
efficient and secure provisioning of test data for software development and quality assurance purposes.
 Used SAS Data Integration Studio and other SAS tools to design and develop data integration solutions.
 Created and maintained SAS macros and stored procedures to simplify data processing and increase code reuse.
 Developed and maintained data pipelines that transferred data from source systems into SAS data stores while
enhancing the performance and scalability of data storage and retrieval.
 Developed automated regression scripts for validation of ETL processes between multiple databases such as AWS Redshift, MongoDB, T-SQL, and SQL Server using Python.
 Performed end-to-end Architecture & implementation assessment of various AWS Cloud services like Amazon EMR,
Redshift, IAM, RDS, and Athena.
 Built data pipelines in AWS using tools like AWS Glue, AWS Data Pipeline, and Amazon Kinesis to extract, transform, and load (ETL) data from NoSQL databases to other data stores and analytics services.
 Developed data governance frameworks, including data classification, data quality, data lineage, and data retention
regulations, in collaboration with cross-functional teams.
 Coordinated with management and technical services staff to complete the data migration.
 Designed and created a future state solution for the Data Lake's Treasury Data Warehouse.
 Designed the project and oversaw a team that integrated investment data into the Treasury Data Lake.
 Updated our Data Architecture standards in accordance with the Data Lake platform.
 Facilitated deployment of the multi-clustered environment using AWS EC2 and EMR, in addition to deploying Docker containers for cross-functional deployment.
 Used AWS Redshift as it offers a number of performance tuning options, including sort keys, distribution styles, and query optimization.
 Maximized query efficiency in AWS Redshift by choosing the proper table distribution style and sort keys.
 Collaborated with business analysts and stakeholders to identify key data mart requirements and dimensions,
ensuring the inclusion of critical business metrics and performance indicators for informed decision-making and
strategic planning
 Successfully extracted, transformed, and loaded data from SAP systems, including expertise in handling various data
structures and modules within SAP, such as SAP ERP, SAP HANA, and SAP BW
 Used Tableau data integration to aggregate data from several sources into a single dashboard, applying data integration and data blending.
 Designed and built intuitive, user-friendly interactive dashboards in Tableau.
 Applied data visualization best practices, including color theory, layout design, and typography, to Tableau dashboards.
 Used AWS Glue as a data engineer to build ETL jobs that convert data by performing different operations like filtering,
aggregating, and merging data. Custom transformations written in Python or Scala are also supported by AWS Glue.
 Created and improved SQL queries with the aid of tools like SQL Profiler and Query Analyzer for quick and effective
data retrieval and processing in OLTP.
 Worked extensively with AWS services like EC2, S3, VPC, ELB, Auto Scaling Groups, Route 53, IAM, CloudTrail,
CloudWatch, CloudFormation, CloudFront, SNS, and RDS.
 Used AWS Simple Workflow and AWS Step Functions for automating and scheduling data pipelines.
 Experience with the Snowflake cloud data warehouse and AWS S3 buckets for integrating data from multiple source systems, including loading nested JSON-formatted data into Snowflake tables.
 Implemented a PoC for using Apache Impala for data processing on top of Hive.
 Worked with the different data sources that feed into the Oracle HCM system, such as personnel records, payroll information, and performance measures.
 Developed Apache Pig scripts and UDFs extensively for data transformations, calculating statement date formats, and aggregating the monetary transactions.
 Built reusable Hive UDF libraries for business requirements which enabled usage of UDFs in Hive queries.
 Was charged with devising and putting security measures in place to safeguard the sensitive data stored within the Oracle HCM system.
Environment: Hadoop (CDH5), UNIX, Scala, Apache Airflow, Python, Snowflake, SAS, Storm, Databricks, Spark-SQL, MapReduce, Apache Pig, Hive, Impala, Java, Eclipse, Kafka, MySQL, Oozie, AWS, EMR, S3, Oracle HCM.

Client: Consolidated Edison Inc, California Oct 2017 - Mar 2019


Role: Data Engineer

Responsibilities:
 Used Azure Blob Storage, Azure Files, and Azure Queues, a few of the scalable and secure data storage options offered by Azure.
 Experience working in the Spark ecosystem, using Spark SQL and Scala queries on different formats such as text files and CSV files.
 Expertise using and adhering to well-known Git processes, including Gitflow, Feature Branch Workflow, and Forking
Workflow, assuring productive teamwork and seamless code integration.
 Designed and implemented a comprehensive data mart solution within Consolidated Edison Inc's data infrastructure,
enabling efficient data storage, retrieval, and analysis for business intelligence and reporting purposes
 Knowledge of how to connect Azure Synapse to several data sources, such as Azure Cosmos DB and Azure Blob Storage, to ingest and process data in real time.
 Demonstrated proficiency in integrating SAP data with various data warehouse solutions, ensuring smooth and
efficient data transfer and synchronization processes
 Used Azure Stream Analytics for real-time data processing and analysis, enabling the development of sophisticated event processing pipelines.
 Familiarity with Python libraries including NumPy, Pandas, and SciPy, as well as the PySpark concepts of RDDs (Resilient Distributed Datasets) and Spark DataFrames.
 Understanding of how to deploy containerized applications using Azure DevOps in conjunction with other Azure
services like Azure Functions, Azure Kubernetes Service (AKS), and Azure Container Registry (ACR).
 Used Snowflake to process and analyze huge amounts of data in real-time or batch mode, supported by a range of data processing frameworks including Apache Spark, Apache Hive, and Apache NiFi.
Environment: MapR, Hadoop MapReduce, Snowflake, Git, HDFS, Spark, SQL, Eclipse, HBase, Shell Scripting, Scala, AWS cloud, Apache NiFi, Python, LINUX.

Client: Rosemont, Illinois Oct 2015 - Oct 2017


Role: Hadoop Developer

Responsibilities:
 Developed new Spark SQL ETL logic in Big Data for the migration and availability of the facts and dimensions used for analytics.
 Developed Spark SQL applications for Big Data migration from Teradata to Hadoop, reducing memory utilization in Teradata analytics.
 Implemented and maintained data pipelines on AWS, utilizing services like Amazon S3, AWS Glue, and AWS Lambda
to efficiently extract, transform, and load data for analytics and reporting purposes.
 Developed Spark SQL logic that mimics the Teradata ETL logic, pointing the output delta back to newly created Hive tables as well as the existing Teradata dimension, fact, and aggregate tables.
 Involved in creating views on top of HIVE tables for analytics, and actively participated in upgrading Tableau platforms
in a clustered environment, including performing content upgrades.
 Monitored resources and applications using AWS CloudWatch, including creating alarms to monitor metrics for EBS, EC2, ELB, RDS, S3, EMR, IAM, Athena, Glue, and SNS, and configured notifications for the alarms generated based on defined events.
 Designed, developed, and implemented pipelines using Python API (PySpark) of Apache Spark on AWS EMR.
 Created, modified, and executed DDL on AWS Redshift and Snowflake tables to load data.
 Designed and developed ETL process using Informatica tool to load data from a wide range of sources such as Oracle,
Flat files, Salesforce, AWS Cloud.
 Designed and managed scalable data storage solutions on GovCloud, employing technologies such as Amazon EBS,
Amazon EC2, and Amazon RDS for efficient and reliable data storage and retrieval
 Collected and aggregated large amounts of web log data from different sources such as web servers, mobile and
network devices using Apache and stored the data into HDFS for analysis.
 Involved in creating Hive tables, loading data, writing Hive queries, and creating partitions and buckets for optimization.
 Implemented data governance and security measures on AWS, ensuring compliance with industry standards and
regulations.
 The custom File System plugin allows Hadoop MapReduce programs, HBase, Pig and Hive to work unmodified and
access files directly.
 Developed and optimized ETL (Extract, Transform, Load) pipelines to process large volumes of data from various
sources and load it into data warehousing solutions on AWS.
 Used AWS Athena extensively to import structured data from S3 into multiple systems, including Redshift, and to generate reports; for constructing the common learner data model, which obtains data from Kinesis in near real time, used Spark Streaming APIs to perform the necessary conversions and operations on the fly.
 Developed Snowflake views to load and unload data from and to an AWS S3 bucket, and transferred the code to production.
 Monitored data mart usage and data access patterns, identifying potential bottlenecks and performance constraints and implementing proactive measures to mitigate performance issues and ensure uninterrupted data availability.
 Implemented real-time data processing solutions using Apache Kafka and Apache Storm, enabling the ingestion,
processing, and analysis of high-volume streaming data for real-time business insights.
 Created and managed data lakes on AWS, utilizing services like Amazon Athena and AWS Glue to enable efficient
data exploration and analysis.
 Proficiently administered user permissions, user groups, and scheduled instances for reports in Tableau,
demonstrated strong expertise in creating and monitoring clusters on Hortonworks Data Platform, and efficiently
developed UNIX shell scripts to load a large number of files into HDFS from the Linux File System.
 Designed, developed, tested, implemented, and supported data warehousing ETL using Ab Initio and Hadoop technologies.
 Connected different data sources to Tableau like Oracle JD Edwards, Salesforce.com, IBM Cognos, and SQL Server &
Excel for live connection on various Dashboards.

Environment: Snowflake, Hadoop, HDFS, Python, UNIX, Shell Scripting, Teradata, Spark SQL, Kafka, AWS, Talend.

Client: Sphinx Solutions, Pune May 2013 - Aug 2014


Role: ETL Developer

Responsibilities:
 Developed Advance PL/SQL packages, procedures, triggers, functions, Indexes and Collections to implement business
logic using SQL Navigator.
 Generated server-side PL/SQL scripts for data manipulation and validation and materialized views for remote
instances.
 Created management analysis reporting using Parallel Queries, Java Stored Procedures. Participated in change and
code reviews to understand the testing needs of the change components. Worked on troubleshooting defects in a
timely manner.
 Involved in defragmentation of tables, partitioning, compression, and indexing for improved performance and efficiency. Involved in table redesign with implementation of partitions and partition indexes to improve database performance and maintainability.
 Experience in Database Application Development, Query Optimization, Performance Tuning and DBA solutions and
implementation experience in complete System Development Life Cycle.
 Used Informatica PowerCenter Designer to analyze source data and to extract and transform it from various source systems, incorporating business rules using the different objects and functions that the tool supports.
 Used Power Center Designer to create mappings and mapplets to transform the data according to the business rules.
 Used various transformations like Source Qualifier, Joiner, Lookup, SQL, Router, Filter, Expression and Update
Strategy etc.
 Conducted regular data mart backups and disaster recovery tests, ensuring the availability of historical data snapshots
and the resilience of the data mart infrastructure against potential data loss or system failures
 Created and configured Workflows and Sessions to transport the data to target Oracle tables using Informatica
Workflow Manager.
 Implemented complex business rules in Informatica Power Center by creating re-usable transformations, and robust
Mapplets.
 Implemented performance tuning of Sources, Targets, Mappings and Sessions by identifying bottlenecks and used
debuggers to debug the complex mappings and fix them.
 Designed and developed Informatica Workflow to extract data from XML files and loaded it into the database.
 Worked along with the UNIX team for writing UNIX shell scripts to customize the server scheduling jobs.
 Involved in ETL code using PL/SQL to meet requirements for extraction, transformation, cleansing, and loading of data from source to target data structures.
 Involved in the continuous enhancements and fixing of production problems. Designed, Implemented and Tuned
interfaces and batch jobs using PL/SQL.

Environment: Oracle 10g/11g, SQL Plus, TOAD, SQL Loader, SQL Developer, PL/SQL, Informatica Power Center, Designer, Workflow Manager, Workflow Monitor, Repository Manager, Shell Scripts, UNIX, Windows XP, Splunk, HTML, XML.

EDUCATION:

Bachelor of Technology in Computer Science 2009 - 2013


Vignan University, Visakhapatnam, India.
