-
Responsibilities:
1. Research state-of-the-art results in the field of data management and security;
2. Participate in the design and development of innovative data management and security projects; plan and implement technical architecture with respect to performance, stability, scalability, and security;
3. Deliver high-quality code on time according to the project plan;
4. Publish papers at ** academic conferences (e.g., SIGMOD, VLDB, CCS).
Requirements:
1. Graduate degree (Master's or PhD) in computer science or a related field, with a solid foundation in data structures and databases;
2. Familiar with Unix/Linux operating systems; proficient in mainstream languages such as C/C++, Go, or Java; good programming ability and habits;
3. Familiar with database internals; understands common approaches and techniques for data security and privacy protection;
4. Strong logical reasoning and openness to new technologies; good learning, analysis, and problem-solving skills; strong sense of responsibility;
5. Has published papers at ** academic conferences (e.g., SIGMOD, VLDB, CCS).
Preferred qualifications:
1. Research experience at the intersection of security, AI/large models, and data management; kernel development experience with relational databases, NoSQL, or blockchain systems;
2. Familiar with blockchain principles and systems such as Bitcoin, Ethereum, Libra, and Hyperledger Fabric; familiar with the principles and components of ledger-database systems such as QLDB, LedgerDB, Oracle blockchain table, and Microsoft SQL Ledger;
3. Solid grasp of public-key cryptography: common encryption/decryption algorithms, authentication and signature algorithms, hash algorithms, security protocol design, access control models, and software security theory;
4. Research and hands-on experience in security, including but not limited to TEEs (trusted execution environments), secure multi-party computation, differential privacy, homomorphic encryption, threshold cryptography, functional encryption, searchable encryption, zero-knowledge proofs, security proofs for protocol constructions, and cryptographic applications of blockchains. Candidates with experience optimizing cryptographic algorithms, implementing cryptographic libraries, or delivering large cryptographic applications, and winners of programming competitions, are preferred.
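The ledger databases named in the preferred qualifications (QLDB, LedgerDB, SQL Ledger) all build on one primitive: each record's hash covers its predecessor's hash, so editing any past entry breaks the chain. A minimal stdlib-only sketch of that idea; this is an illustration, not any vendor's actual API:

```python
import hashlib
import json

def entry_hash(prev_hash: str, payload: dict) -> str:
    """Hash a ledger entry together with its predecessor's hash."""
    data = prev_hash + json.dumps(payload, sort_keys=True)
    return hashlib.sha256(data.encode()).hexdigest()

def append(ledger: list, payload: dict) -> None:
    """Append an entry whose hash covers the previous entry's hash."""
    prev = ledger[-1]["hash"] if ledger else "0" * 64
    ledger.append({"payload": payload, "hash": entry_hash(prev, payload)})

def verify(ledger: list) -> bool:
    """Recompute every hash; any edited payload invalidates the chain."""
    prev = "0" * 64
    for entry in ledger:
        if entry["hash"] != entry_hash(prev, entry["payload"]):
            return False
        prev = entry["hash"]
    return True

ledger = []
append(ledger, {"op": "credit", "amount": 100})
append(ledger, {"op": "debit", "amount": 30})
assert verify(ledger)                      # untampered chain verifies
ledger[0]["payload"]["amount"] = 999
assert not verify(ledger)                  # tampering is detected
```

Real ledger databases add signed digests and Merkle proofs on top, but the tamper-evidence property is the same.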
-
Responsibilities:
1. Source, select, and evaluate suppliers for lower-tier materials for data center hardware; lead contract negotiations, price negotiations, qualification assessment, and risk management;
2. Develop a deep understanding of market trends and the industry ecosystem; build and maintain partner relationships; formulate procurement strategies based on business needs;
3. Develop suppliers' collaborative capabilities, coordinate internal and external resources, and establish stable supplier relationships to ensure continuity of supply.
Requirements:
1. Bachelor's degree or above;
2. Familiar with supply chain and procurement management, with years of experience procuring lower-tier materials for data center servers or switches;
3. Strong communication, coordination, and fast-learning skills; able to multitask and make independent decisions;
4. Clear logical thinking, rigorous work style, and a strong sense of responsibility.
-
25k-40k | 10+ years experience | Bachelor's | Education | No funding needed | 500-2000 employees
Responsibilities:
Job Title: Head (Data Intelligence Department)
Department: Data Intelligence Department, HKUST(GZ)
As a higher education institution committed to innovation and data-driven decision-making, HKUST(GZ) is establishing a new Data Intelligence Department aimed at harnessing the power of data to enhance operational efficiency and strategic growth across all levels of the organization. We are seeking a visionary leader to head this critical initiative. The ideal candidate will possess a blend of strategic vision, technical expertise, and leadership skills to build and nurture a high-performing team dedicated to data governance, data analytics, and business intelligence.
Duties
1. Drive cooperation among various departments to develop and implement a university-wide data governance strategy that aligns with the University's mission and strategic goals.
2. Establish the data governance framework, develop long-term plans for the AI/HPC Data Center and Smart Campus, and promote the adoption of computing technologies and improvements in energy efficiency to achieve the vision of a "Sustainable Smart Campus".
3. Develop and promote the implementation of campus data governance policies, including standards for the allocation and usage of computing resources and smart campus services, policies for the entire data lifecycle, and category- and class-based data security and protection.
4. Lead innovation in the management and services of the AI/HPC Data Center to meet the computational demands of research, teaching, and the future development of society, and enhance the University's scientific research capabilities by improving operational efficiency and service capabilities.
5. Oversee the operation and intelligent upgrades of Smart Campus systems, including multimedia and research/teaching facilities, office and life services, security access, and smart buildings, ensuring efficient operation and interaction of the systems and upgrading the design of intelligent campus services.
6. Supervise data compliance to ensure it meets both domestic and international data protection laws and standards, ethical norms, and the University's data confidentiality requirements.
7. Guide the data teams of the University's various departments to establish and optimize data processing workflows, foster a culture of data-driven decision-making, promote data accumulation to form data assets, and drive continuous institutional improvement through the strategic use of data across the University.
Requirements:
A qualified candidate should:
1. Hold a bachelor's degree or above in Data Science, Computer Science, or a related field.
2. Have at least 10 years of relevant work experience, including data management and team leadership in large organizations.
3. Possess excellent communication and teamwork skills, and be able to collaborate across departments to promote the implementation of data policies.
4. Be familiar with domestic and international data security and compliance policies and related regulations, such as China's Data Security Law and China's Cybersecurity Law, with experience in handling data compliance in multiple jurisdictions.
5. Have strong strategic planning, organizational management, innovation management, and change implementation capabilities.
This is a Mainland appointment, and the appointee will be offered a contract by the HKUST(GZ) entity in accordance with Mainland labor laws and regulations. Starting salary will be commensurate with qualifications and experience.
-
Responsibilities
1. Assess all visualization and reporting requirements and develop a long-term strategy for dashboard and reporting solutions.
2. Communicate effectively with business partners to understand their needs, and design reports accordingly.
3. Collect and understand the business logic behind all reports and translate it into data model design requirements.
4. Manage projects, prepare updates, and implement all phases of a project.
5. Turn data into insights with actionable execution plans and influence key stakeholders to implement the solutions.
6. Provide training to business teams on BI tool usage and dashboard creation.
Requirements
1. Proficient in using SQL, Python, and R for data manipulation and analysis.
2. Experienced in data visualization and dashboard development with tools like Tableau.
3. Excellent presentation, project management, and people management skills.
4. Bachelor's degree or above in statistics, business analytics, mathematics, computer science, or a related field with training in data and analytics.
5. 5+ years of experience managing analytics & BI projects, with successful project experience using data to drive business value, at a senior-analyst level.
6. Experience in data governance, data quality management, data processing, and insights.
7. Strong in data visualization tools (such as Tableau, Power BI, etc.) as well as BI portal products.
8. Excellent project planning and organization; able to use data to identify and solve problems.
9. Experience in retail, CRM, supply chain, or production is a plus.
-
Staff Analytics Engineer, Data
[Beijing · Guomao] 2025-10-09 | 50k-58k | 10+ years experience | Bachelor's | Marketing services / Consulting | Series B | 50-150 employees
Responsibilities
- Own & deliver cross-team analytics epics end-to-end (often multi-quarter): scoping, design, implementation, rollout, and adoption, with minimal oversight
- Set technical direction for our analytics/BI layer (Looker + dbt + Trino/Spark) and data products; lead design reviews and establish guardrails (cost, reliability, privacy, inclusion)
- Model and govern data: design stable contracts (schemas/SLAs), manage lineage, and evolve domain models that unlock self-service and performance at scale
- Optimize performance & cost across engines (Trino, Spark/Databricks): plan-level analysis, join/partitioning strategies, aggregation layers, caching/materialization; set SLOs with monitoring/alerting
- Raise the bar on engineering quality: testing, CI/CD, documentation, privacy/security, on-call hygiene; lead incident reviews and drive permanent fixes
- Mentor & multiply: coach engineers/analysts, delegate effectively, and contribute to recruiting while holding the bar
Qualifications
- Education: Bachelor's degree or higher in Computer Science or a related technical field, or equivalent practical experience.
- Experience: 8–12+ years in data/analytics engineering or adjacent DE/BI roles, including 5+ years owning production semantic models & transformations and 3+ years leading cross-team initiatives end-to-end
- SQL & performance: Expert SQL with the ability to read and act on query plans (distributed + warehouse). Proven wins on TB-scale data (e.g., ≥2× latency reduction or ≥30% cost savings) via partitioning, file formats, pruning, aggregations, and caching/materialization
- dbt at scale: Operated mid-to-large dbt projects (≈100+ models), using incremental models, tests, exposures, macros/packages, CI/CD, and data contracts; strong documentation and naming standards
- Looker semantic layer: Owned LookML modeling across multiple domains; shipped governed explores/measures for 100+ users, with version control, code review, release process, and change management that enable self-service analytics
- Engines & storage: Hands-on with Trino/Presto and/or Spark/Databricks (distributed plans, join strategies, partitioning, autoscaling); comfortable with Parquet/Iceberg table layouts and query-aware modeling
- Reliability & governance: You set SLOs for BI/analytics surfaces, establish monitoring/alerting, manage lineage & SLAs, and run post-incident reviews to land permanent fixes
- Leadership: Self-directed; sets technical direction for a domain, drives multi-quarter epics, mentors multiple engineers/analysts, leads design reviews, and raises the hiring/promotion bar
- Software fundamentals: Proficient in Python and data tooling; strong testing, CI/CD, and code review hygiene; privacy/security awareness
- AI/LLM enablement: Experience designing or integrating AI-assisted analytics (e.g., chat-to-SQL over a semantic layer, RAG on dbt/Looker docs) with guardrails for access control/PII and an evaluation plan; can quantify adoption or ticket reduction
Nice to Have
- Ad-tech domain expertise (RTB auction dynamics, mediation, attribution, and LTV)
- Production ops for analytics infra: GitOps (Argo CD), IaC (Terraform), Kubernetes-based data services; incident playbooks for data/BI
- Streaming & CDC: Kafka/Kinesis with Flink or Spark Structured Streaming to power near-real-time analytics
- JVM stack: Scala/Java for Spark jobs/UDFs or high-throughput data services
- Feature/ML data interfaces: feature marts or stores (e.g., Feast), batch/online syncing, model telemetry hooks
- Privacy & governance at scale: row/column-level security, tokenization, policy-as-code; familiarity with GDPR/CCPA impacts
- Data observability & lineage tooling: Datadog, Prometheus/Grafana, OpenLineage/DataHub/Amundsen; automated freshness/volume/uniqueness checks
- Experimentation: Experience building the foundations for A/B testing: event definitions, consistent metrics, and safeguards for valid results
-
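The performance bullets in the Staff Analytics Engineer posting above (partitioning, pruning, plan-level analysis) share one idea: lay data out so the engine can skip most of it. A toy, engine-agnostic sketch of date-partition pruning in plain Python; the paths and names are invented:

```python
from datetime import date

# Partitions as an engine like Trino or Spark would see them: one key per day.
partitions = {
    date(2024, 1, d): f"s3://warehouse/events/dt=2024-01-{d:02d}/"
    for d in range(1, 32)
}

def prune(partitions, lo: date, hi: date):
    """Keep only partitions whose key falls inside the filter's date range.

    This mirrors what a query planner does with a predicate like
    WHERE dt BETWEEN :lo AND :hi -- untouched partitions are never read.
    """
    return [path for dt, path in sorted(partitions.items()) if lo <= dt <= hi]

scanned = prune(partitions, date(2024, 1, 10), date(2024, 1, 12))
assert len(scanned) == 3   # 3 of 31 partitions read: roughly 90% less I/O
```

Real engines apply the same logic to Parquet/Iceberg file statistics as well, skipping files whose min/max ranges cannot match the predicate.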
35k-45k × 13 months | 3-5 years experience | PhD | Data services, software development | No funding needed | 150-500 employees
Alvanon HK, Ltd. As the leading global apparel business and product development consultancy, we help our clients grow sales and improve profitability by aligning their internal teams and strategic suppliers, engaging consumers, and implementing world-class innovations throughout their product development processes and supply chains. We sincerely invite analytical, energetic, and self-motivated individuals to join Alvanon. We are looking for candidates for the following position: Machine Learning/Data Science Manager
You will be working at our head office in Hong Kong, developing new products relating to body fit and fashion technology using engineering and machine learning techniques. You will be leading the machine learning team, working closely with other stakeholders to create innovative solutions and products with the latest machine learning technologies.
Responsibilities
● Lead and drive the team to deliver AI/ML solutions.
● Identify and communicate with key stakeholders and understand their requirements.
● Research and develop AI/ML solutions that have a real impact on our business and products.
● Implement machine learning/deep learning models and bring them into production.
● Work with data scientists and software developers through the entire R&D pipeline (problem analysis, idea generation, data collection, model prototyping, evaluation, implementation, deployment, and maintenance).
● Maintain and improve the in-house interface for application prototypes, model visualization, and evaluation.
Requirements
● Doctoral degree in Computer Science, Information Systems, Statistics, or a related quantitative discipline
● A minimum of 5 years of working experience in the relevant field
● Understanding of common machine learning models/algorithms, such as SVM, logistic regression, Bayesian regression, and neural networks
● Basic understanding of probability, statistics, optimization, data structures, and algorithms
● Hands-on experience with machine learning/deep learning tools such as PyTorch, TensorFlow, and Keras
● Strong communication skills in English, Mandarin, and Cantonese
● Research experience in AI/ML/CV/CG in companies or university research labs is a plus
Personality
● Effective communication skills
● A sense of ownership of work and a desire to deliver great products
To apply, please
-
We are seeking a data professional with combined expertise in data analysis and data warehousing to join our dynamic team. The ideal candidate will focus on leveraging data analysis to drive overall data initiatives. This role will be responsible for designing and optimizing data models, handling complex datasets, and improving data quality and processing efficiency. We expect the candidate to work independently while also collaborating closely with the team as needed to drive data-driven business growth.
Key Responsibilities:
- Design and maintain data warehouse architecture to support various business lines, including but not limited to Attractions, Mobility, and Hotels.
- Develop a deep understanding of each business line and use data analysis to support business decisions and strategy development.
- Build and optimize data models to ensure data accuracy and reliability.
- Independently handle and optimize complex datasets, enhancing data quality and processing workflows.
- Collaborate closely with both business and technical teams to ensure data solutions meet business requirements.
- Write technical documentation and maintain the data warehouse's data dictionary.
Qualifications:
- Over 5 years of experience in the data warehouse field.
- Proficient in SQL and database technologies, with hands-on experience managing large-scale databases.
- Strong experience in data model construction, with the ability to independently design and optimize complex data models.
- Extensive experience in data quality and underlying data processing, with the ability to effectively resolve data issues.
- Familiarity with dbt and practical experience with it is preferred.
- Strong analytical thinking and problem-solving skills, with the ability to complete projects independently.
- Excellent communication and teamwork skills, capable of effectively interacting with team members from diverse backgrounds.
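The "data quality and underlying data processing" requirement usually starts with mechanical checks: duplicate keys, null rates, out-of-range values. A stdlib-only sketch of such a batch check; the record shape and column names are invented:

```python
def quality_report(rows, key, required):
    """Return basic data-quality counts for a batch of records."""
    seen, report = set(), {"rows": len(rows), "dup_keys": 0, "nulls": 0}
    for row in rows:
        k = row.get(key)
        if k in seen:
            report["dup_keys"] += 1       # primary-key collision
        seen.add(k)
        # Count missing values in columns the warehouse model requires.
        report["nulls"] += sum(1 for col in required if row.get(col) is None)
    return report

rows = [
    {"booking_id": 1, "hotel": "A", "price": 120.0},
    {"booking_id": 1, "hotel": "B", "price": None},   # dup key + null price
    {"booking_id": 2, "hotel": "C", "price": 80.0},
]
report = quality_report(rows, key="booking_id", required=["hotel", "price"])
assert report == {"rows": 3, "dup_keys": 1, "nulls": 1}
```

In a dbt project the same checks would typically be declared as `unique` and `not_null` tests on the model rather than hand-written.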
-
Do you have experience in architecting data at a global organization scale? Do you enjoy working on cutting-edge technologies while supporting end-users in achieving their business goals?
About QIMA
You will be linking our new companies in the Americas, our teams in Europe (mostly France), and our teams in Asia (China, Hong Kong, Philippines).
Your role
QIMA is growing at 35% per year, 20% of which comes from acquisitions. It is paramount that we integrate newly acquired companies quickly, so that they can extend the benefit of our state-of-the-art data management & dashboards to our new clients and colleagues. Data integration plays a key role here: how do we quickly and unambiguously understand the data of a newly acquired company? How do we connect this data to our existing data flows? Data plays a key role at QIMA: we master the entire data flow, from data collection (with our own inspectors, auditors, and labs), through data processing (BI and data scientists), to data actionability (insights for our customers and for our managers). Data is spread across different departments and areas of expertise inside the company: Marketing, Operations, Sales, IT, and more. Data governance is key, and collaboration around data will unlock the potential to bring even more value to our customers about the quality of their products, and to our managers about their operations. These challenges lead us to look for our Head of Data.
In this role, your main responsibilities will include, but are not limited to:
- Project Management
  o Shape the business cases of data projects with stakeholders, and deliver them
  o Lead the transversal projects around our data warehouse and cloud ETL
  o Lead the Master Data Management projects, leveraging the key skills and technologies already in place across departments
  o Drive the data integration of newly acquired companies within the QIMA group in order to synchronize reporting dashboards and provide a transversal understanding of the business
  o Lead discussions and integration projects with external partners
  o Track results and drive continuous improvement
  o Be responsible for the budget, roadmap, quality, and delivery of these projects
- People Management
  o Manage the Data Engineering and Business Intelligence teams
- Community Animation
  o Animate data governance across domains and departments
  o Be the guardian of data quality in the group, challenge data inconsistencies, and ensure that data is shared by all departments in all circumstances
  o Implement knowledge-sharing practices inside the data community
  o Be responsible for data lineage and data quality
- Management of the run
  o Cooperate with our IT support organization to set up support for the newly created systems
  o Organize and manage the day-to-day operation and support of these systems
Requirements: In order to succeed in this role, you must:
- Hold a Master's degree in computer science
- Have extensive experience and knowledge of data solution architecting
- Have experience with transversal projects
- Be hands-on and autonomous, at ease discussing with a CEO or a field operator
- Be open-minded, agile with change, and pragmatic
- Be able to drive a workstream and train final users
- Be able to work in a multinational environment and on multiple simultaneous projects
- Have strong communication skills, both oral and written
- Have excellent teamwork and interpersonal skills
- Be fluent in English: daily use is required with our colleagues all over the world
- If you are based in Europe, be willing and able to travel.
We offer: a competitive package, performance bonus, fast career progression, and international career opportunities.
-
Responsibilities:
• Develop AI and machine learning solutions to optimise business performance across different areas of the organisation;
• Build exploratory analyses to identify the root causes of a particular business problem;
• Explore and analyze all the data to understand customer behavior and provide a better customer experience;
• Use data analytics to help in-country Marketing, Distribution, Operations, Compliance, and HR teams increase revenue, lower costs, and improve operational efficiency;
• Identify actionable insights, suggest recommendations, and influence the direction of the business across cross-functional groups;
• Build MIS to track the performance of implemented models, validate their effectiveness, and show the financial benefits to the organization;
• Support the data analytics team with data extraction and manipulation;
• Help the decision analytics team create documentation for all analytical solutions developed;
• Build the data infrastructure necessary for the development of predictive models and more sophisticated data analysis;
• Develop data analyses for different business units.
Requirements:
• Studying a numerate discipline, e.g. actuarial science, mathematics, statistics, engineering, or computer science with a strong computer programming component;
• Demonstrable understanding of data quality risks and the ability to carry out the necessary quality checks to validate results obtained;
• Knowledge of a variety of machine learning techniques (clustering, decision trees, random forests, artificial neural networks, etc.);
• Proven hands-on experience with at least one advanced data analysis platform (e.g. Python, R);
• Sound knowledge of programming in SQL, or other programming experience;
• Articulate, with excellent oral and written communication skills. A fast learner with a willing attitude. Intellectually rigorous, with strong analytical skills and a passion for data.
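Of the techniques this posting lists (clustering, decision trees, random forests, neural networks), clustering is the easiest to sketch end-to-end. A minimal one-dimensional k-means (Lloyd's algorithm) in plain Python, for illustration only:

```python
def kmeans_1d(points, centers, iters=20):
    """Lloyd's algorithm on 1-D data: assign points, then recompute means."""
    for _ in range(iters):
        clusters = [[] for _ in centers]
        for p in points:
            # Assign each point to its nearest center.
            i = min(range(len(centers)), key=lambda i: abs(p - centers[i]))
            clusters[i].append(p)
        # Move each center to the mean of its assigned points
        # (an empty cluster keeps its previous center).
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return sorted(centers)

# Two obvious groups: around 2 and around 11.
points = [1, 2, 3, 10, 11, 12]
centers = kmeans_1d(points, centers=[0.0, 6.0])
assert centers == [2.0, 11.0]
```

Library implementations (e.g. scikit-learn's `KMeans`) add multi-dimensional distances, smarter initialization, and convergence checks, but the assign/update loop is the same.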
-
Data Engineer (fluent English)
[Guangzhou · Huangpu District] 2023-06-20 | 13k-26k | 3-5 years experience | Associate degree | Software services/consulting, IT services/consulting | No funding needed | 50-150 employees
Foreign bank; two-day weekends; no overtime. There are actually five positions:
- 4-9 years: data analysis, deep learning, NLP, transformer models
- 3-6 years: data development, Hadoop, Spark, Kafka, SQL, Flink
- 3-6 years: data development, Python, SQL, pandas, databases
- 2-6 years: Machine Learning Engineer, Python, Docker, CI/CD; machine learning and/or ML model monitoring experience a plus
- 6-9 years: experience in HDFS, MapReduce, Hive, Impala, Sqoop, Linux/Unix technologies; Spark is an added advantage
Job Duties & Responsibilities:
• Support finance regulatory reporting projects & applications as an ETL developer | Big Data applications following the Agile software development life cycle | level 2/3 support of the data warehouse.
-
1. Who you are
As a person you are motivated to continuously develop and enhance your programming skills and your knowledge of machine learning and AI applications. The IKEA business, our values, and how they apply to the data management process are your passion. Furthermore, you are energized by working both independently and interdependently with the architecture network and cross-functions, and you appreciate the mix of strategic thinking and turning architecture trends into practice. Last but not least, you share and live the IKEA culture and values.
In this role you have proven advanced training in (computer) engineering, computer science, econometrics, mathematics, or equivalent. You have experience and knowledge of working with large datasets and distributed computing architectures. You have knowledge of coding (R and/or Python) and of developing artificial intelligence applications. You have experience in statistical analysis and statistical software, and you are confident in data processing and analysis. You have demonstrable experience of working in an Agile or DevOps set-up.
You have knowledge in the following areas:
• Knowledge of data set processes for data modelling, mining, and production, as well as visualization tools, e.g. Tableau
• Knowledge of at least one building block of artificial intelligence
• Knowledge of machine learning development languages (R and/or Python) and other developments in the fast-moving data technology landscape, e.g. Hive or Hadoop
• Knowledge of DevOps and agile development practices
• Exposure to the development of industrialised analytical software products
• Knowledge of IKEA's corporate identity, core values, and vision of creating a better everyday life for the many people
We believe that you are able to work with large amounts of raw data and feel comfortable working with different programming languages. You have high intellectual curiosity, with the ability to develop new knowledge and skills and to use new concepts, methods, digital systems, and processes to improve performance; the ability to provide input to user-story prioritisation as appropriate, based on new ideas, approaches, and strategies; and the ability to understand the complexity of the IKEA business and the role of data and information as an integrated part of it.
2. Your responsibilities
As Data Scientist you will perform predictive and prescriptive modelling and help develop and deploy artificial intelligence and machine learning algorithms to reinvent and grow the IKEA business in a fast-changing digital world. You will:
• Explore and examine data for use in predictive and prescriptive modelling to deliver insights to business stakeholders and support them in making better decisions
• Influence product and strategy decision-making by helping to visualize complex data and enable change through knowledge of the business drivers that make the product successful
• Support senior colleagues with modelling projects, identifying requirements and building tools that are statistically grounded, explainable, and able to adapt to changing attributes
• Support the development and deployment of the different analytical building blocks that support the requirements of the capability area and customer needs
• Use root-cause research to identify process breakdowns and provide data, using various skill sets, to find solutions to the breakdowns
• Work closely with other data scientists and across functions to help produce all required design specifications and ensure that data solutions work together and fulfill business needs
• Work across initiatives within INGKA Group, steering towards data-driven solutions
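Predictive modelling of the kind described above often starts from something as simple as an ordinary least-squares fit. A stdlib-only sketch; the data is invented, not IKEA's:

```python
def fit_line(xs, ys):
    """Ordinary least squares for y = a*x + b on 1-D data."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    # Slope = covariance(x, y) / variance(x); intercept from the means.
    a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return a, my - a * mx

# Invented example: weekly store visits (x) vs. sales (y).
xs = [1, 2, 3, 4]
ys = [3, 5, 7, 9]            # lies exactly on y = 2x + 1
a, b = fit_line(xs, ys)
assert (a, b) == (2.0, 1.0)
predicted = a * 5 + b         # forecast for x = 5
assert predicted == 11.0
```

Production work would use a statistics library with diagnostics and regularization, but the estimator itself is this formula.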
-
The Role
We are looking for a Data Engineer to be part of our Applications Engineering team. This person will design, develop, maintain, and support our Enterprise Data Warehouse & BI platform within Tesla using various data & BI tools. This position offers a unique opportunity to make a significant impact on the entire organization by developing data tools and driving a data-driven culture.
Responsibilities:
• Work in a time-constrained environment to analyze, design, develop, and deliver Enterprise Data Warehouse solutions for Tesla's Sales, Delivery, and Logistics teams.
• Set up, maintain, and optimize the big data platform for production usage in reporting, analysis, and ML applications.
• Establish scalable, efficient, automated processes for data analyses, model development, validation, and implementation.
• Create ETL pipelines using Spark/Flink.
• Create real-time data streaming and processing using open-source technologies like Kafka, Spark, etc.
• Develop collaborative relationships with key business sponsors and IT resources for the efficient resolution of work requests.
• Provide timely and accurate estimates for newly proposed functionality enhancements and for critical situations.
• Develop, enforce, and recommend enhancements to applications in the areas of standards, methodologies, compliance, and quality assurance practices; participate in design and code walkthroughs.
Qualifications:
Minimum Qualifications:
• Proficient experience building large-scale Spark batch applications.
• Strong experience in Data Warehouse ETL design and development, methodologies, tools, processes, and best practices.
• Strong experience creating stellar dashboards and reports for C-level executives.
• A good understanding of one module, one ID, and one service.
Preferred Qualifications:
• 3+ years of development experience in open-source technologies like Scala and Java.
• Excellent experience with Hortonworks/Cloudera platforms.
• Practical experience using HDFS.
• Relevant working experience with Docker and Kubernetes preferred.
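The real-time Kafka/Spark processing mentioned above typically reduces to keyed, time-windowed aggregation. A toy stdlib-only sketch of tumbling-window counts; the event shape and window size are invented, and a real pipeline would let Spark or Flink manage this state (plus late data and watermarks):

```python
from collections import defaultdict

WINDOW = 60  # tumbling window size in seconds

def window_counts(events):
    """Count events per (key, 60-second window).

    Each event is (epoch_seconds, key); in production these would be read
    from Kafka, with the streaming engine holding the window state.
    """
    counts = defaultdict(int)
    for ts, key in events:
        window_start = ts - ts % WINDOW   # floor timestamp to its window
        counts[(key, window_start)] += 1
    return dict(counts)

events = [
    (0, "delivery"), (30, "delivery"), (59, "order"),
    (60, "delivery"),                     # falls into the next window
]
assert window_counts(events) == {
    ("delivery", 0): 2,
    ("order", 0): 1,
    ("delivery", 60): 1,
}
```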
-
Specialist Big Data Engineer, Automated Driving (AD)
[Beijing · Houshayu] 2023-04-28 | 30k-35k × 15 months | 3-5 years experience | Bachelor's | Automotive/Mobility | Unfunded | 500-2000 employees
BMW is focusing on the development of automated driving (AD) with great effort. In order to develop and test highly intelligent software for environmental perception, maneuver planning, and acting, which finally enables the self-driving capability of an autonomous car, a huge amount of environmental sensor data needs to be collected, hosted, and processed in an AD Development Platform (AD Data Center). The processing covers, e.g., broad data analytics, reprocessing and KPI evaluation, simulation, virtual endurance runs, etc. The BMW AD team in China is fully integrated into the global network of highly educated experts from different domains. In this job you will play a crucial role at the interfaces between BMW AD SW development and IT teams, as well as Chinese internet tech companies and high-tech AD suppliers. Big data, distributed data, and RaaS (Reprocessing as a Service) (e.g. MapReduce, HDFS, Airflow, Jenkins, CI/CD, etc.) will be your core focus. Your prime target will be to accelerate AD product development by analyzing FT requirements and designing big data architectures, tasking the development of related big data applications, and managing their deployment to a massive off-site data center BMW has just developed in cooperation with a big Chinese tech player. In order to achieve this target you will be exposed to a complex project environment involving both BMW internal and external partners. Technologies you will be working with include Docker, OpenShift, Kubernetes, Grafana, Spark, etc.
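Sensor logs arriving from many suppliers rarely share a format, so pipelines like the ones described usually begin by normalizing every source into one record shape before any distributed processing. A stdlib-only sketch of that dispatch step; the formats and field names are invented:

```python
import csv
import io
import json

def parse_csv(blob: str):
    """Parse a CSV payload into a list of dict records."""
    return list(csv.DictReader(io.StringIO(blob)))

def parse_jsonl(blob: str):
    """Parse newline-delimited JSON into a list of dict records."""
    return [json.loads(line) for line in blob.splitlines() if line.strip()]

# One parser per source format; every parser yields the same dict shape,
# so downstream Spark/MapReduce jobs see a single schema.
PARSERS = {"csv": parse_csv, "jsonl": parse_jsonl}

def extract(blob: str, fmt: str):
    """Normalize a raw payload into records, whatever its source format."""
    return PARSERS[fmt](blob)

csv_blob = "signal,value\nspeed,88\n"
jsonl_blob = '{"signal": "speed", "value": "88"}\n'
assert extract(csv_blob, "csv") == extract(jsonl_blob, "jsonl")
```

At data-center scale the same pattern appears as per-source readers feeding a common Parquet or Avro schema rather than Python dicts.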
Major Responsibilities:
- Design, deploy, and improve distributed big data systems targeting automated driving applications;
- Task the development and improvement of off-board backend big data applications and related communication protocols targeting automated driving use cases;
- Manage and structure requirements provided by BMW's AD development teams regarding the big data AD platform;
- Steer BMW China AD development teams and external cooperation partners (a big Chinese tech player) regarding data-driven development;
- Review, monitor, and report the status of your own projects;
- Support the budget planning process for the organizational unit;
- Research, recommend, design, and develop our big data architecture; ensure system, technical, and product architectures are aligned with the China data platform strategy.
Qualifications:
- Master's/Bachelor's degree in Computer Science, Electrical/Electronic Engineering, Information Technology, another related field, or equivalent;
- Good communication skills and good English; German language skills optional and appreciated;
- Rich experience in big data processing; Hadoop/Spark ecosystem applications like Hadoop, Hive, Spark, and Kafka preferred;
- Solid programming skills in, e.g., Java, Scala, or Python;
- Rich experience with Docker and Kubernetes;
- Familiar with CI/CD tools like Jenkins and Ansible;
- Substantial knowledge and experience in software development and engineering in (distributed big data) cloud/compute/data lake/data center systems (MapR, OpenShift, Docker, etc.);
- Experience with scheduling tools is a plus; Airflow and Oozie preferred;
- Experience with AD data analysis, e.g. lidar, radar, image/video, and CAN bus data;
- Experience building and maintaining data quality standards and processes;
- Strong background working with Linux/UNIX environments;
- Strong Shell/Perl/Ruby/Python scripting experience;
- Experience with other NoSQL databases is a plus; Elasticsearch and HBase preferred;
- Solid communication skills, with the ability to communicate sophisticated technical concepts and align on decisions with global and China ***** partners;
- A passion for staying on top of the latest happenings in the tech world and an attitude to discuss and bring those into play;
- Strong hands-on experience in data warehouse ETL design and development; able to build scalable and complex ETL pipelines for source data in different formats.
-
Data Analysis Engineer
[Guangzhou · Lingnan] 2023-03-27 | 12k-18k | 1-3 years experience | Associate degree | Information security, data services | No funding needed | 15-50 employees
Roles & Responsibilities
- Responsible for designing, developing, and maintaining business intelligence solutions using Tableau.
- Manage MS SQL databases and develop SQL scripts: views and stored procedures.
- Work on the SSIS platform for data integration and workflow applications.
- Create data models to support the design and development of Tableau reports and dashboards.
- Perform unit testing of reports and dashboards to ensure that they meet the specifications and requirements.
- Collaborate with cross-functional teams, business analysts, and stakeholders to understand the data requirements and design visualizations that provide insights and support decision-making.
- Document the design, development, and maintenance of reports and dashboards, for example by creating user manuals and training materials for end-users.
Job Requirements
- A minimum of 2 years of relevant experience.
- Knowledge of SQL & Tableau is a must.
- Knowledge of MS SQL/SSIS is a must.
- Knowledge of Excel VBA/Python is a big advantage.
- Skills in Microsoft Office (Word, PowerPoint, and Excel) are essential.
- Strong verbal and written communication.
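The "SQL scripts: views and stored procedures" work above can be illustrated with a view over Python's stdlib `sqlite3` (SQLite has no stored procedures, so only the view half is shown; the table and column names are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE sales (region TEXT, amount REAL);
    INSERT INTO sales VALUES ('North', 100), ('North', 50), ('South', 70);

    -- A view encapsulates the aggregation a Tableau dashboard would query,
    -- so report logic lives in the database instead of the BI tool.
    CREATE VIEW region_totals AS
        SELECT region, SUM(amount) AS total
        FROM sales
        GROUP BY region
        ORDER BY region;
""")
rows = conn.execute("SELECT * FROM region_totals").fetchall()
assert rows == [("North", 150.0), ("South", 70.0)]
conn.close()
```

On MS SQL Server the view definition would be nearly identical, with repeated report parameters factored into stored procedures instead.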


