Scientific Knowledge Engineer - Raleigh, NC at Geebo

Scientific Knowledge Engineer

Alphanumeric is hiring a SCIENTIFIC KNOWLEDGE ENGINEER to work remotely out of the Research Triangle Park, NC area with our client of 20 years, committed to improving lives through medical and pharmaceutical advancements.

The Onyx Research Data Platform organization represents a major investment by R&D and Digital & Tech, designed to deliver a step-change in our ability to leverage data, knowledge, and prediction to find new medicines. We are a full-stack shop consisting of product and portfolio leadership, data engineering, infrastructure and DevOps, data/metadata/knowledge platforms, and AI/ML and analysis platforms, all geared toward:
  • Building a next-generation, metadata- and automation-driven data experience for the company's scientists, engineers, and decision-makers, increasing productivity and reducing time spent on 'data mechanics'
  • Providing best-in-class AI/ML and data analysis environments to accelerate our predictive capabilities and attract top-tier talent
  • Aggressively engineering our data at scale, as one unified asset, to unlock the value of our unique collection of data and predictions in real time

The Scientific Knowledge Engineering team, which sits within the Onyx Product Management organization, is responsible for the data modeling, ontology definition and management, vocabulary mapping, and other key metadata activities that ensure Onyx platforms and data assets speak scientific language. The team is a core contributor to the R&D Knowledge Graph, the semantic layer that connects all of our data and metadata systems, as well as to the core metadata experiences that ultimately allow us to build products and services that both delight our customers and enable impressive automation and intelligence.

This role is responsible for maximizing the lifetime value of our data assets: bringing purpose to data by translating highly technical information from domain experts into an appropriate data model, complete with supporting ontology and vocabulary, that can be used to structure and index the data effectively. The engineer works with product managers and R&D subject matter experts to define the language of science (data models, ontologies, standards, etc.) for data products, acting as the voice of the knowledge base and of asset interoperability and value. This includes responsibility for understanding computational methods and translating their requirements back through the data chain, maximizing the quality and speed of data from source to support multivariate experimental analysis and data-driven decision-making.
  • Define schemas and data models of scientific information required for the creation of value-adding data products.
  • Take accountability for the quality control (through validation and verification) of mapping specifications, e.g., models, schemas, and controlled vocabularies, to be industrialized by data engineering and maintained in platform-provisioned tooling.
  • Work with product managers and engineers to confidently convert business needs into well-defined business requirements, enabling the integration of large-scale biology data to predict, model, and stabilize therapeutically relevant protein complex and antigen conformations for drug and vaccine discovery.
  • Collaborate with external groups to align company data standards with industry and academic ontologies, ensuring that data standards are defined with usage and analytics in mind. The role may also provide data source profiling and advisory consultancy to R&D outside of Onyx.
  • Support effective ingestion of data by the company by understanding the entry requirements set by platform engineering teams and ensuring that the 'barrier for entry' is met, e.g., scientific information has the appropriate metadata to be indexed, structured, integrated, and standardized as needed. This may require articulating company engineering standards and metadata information needs to third parties to ensure efficient and automated ingestion at scale.
  • Provide bespoke subject matter expertise for R&D data to translate deep science into data for actionable insights.

Basic Qualifications

  • Bachelor's degree (Bioinformatics, Biomedical Science, Biomedical Engineering, Molecular Biology, or Computer Science)
  • Biology-related work experience
  • 5-8 years of job-related experience with an established track record of delivery
  • Working experience querying relational databases (SQL)
  • Experience with industry-standard data management / metadata platforms, e.g., Collibra, Datahub, Datum, Informatica
  • Data modeling, quality, analysis, and profiling (working experience with any data quality tool: SAS, Ataccama, Informatica Data Quality, Talend, OpenRefine)
  • Experience with industry-standard tools for building data protocols, e.g., Avro, Protocol Buffers, Thrift
  • Experience with at least one programming language, e.g., Python, for scripting vocabulary mappings, building data models, etc.
  • Awareness of RDF, ontologies, and reference data

Preferred Qualifications

  • Demonstrated comfort operating in and leading a matrixed team across organizational boundaries
  • Membership of a data standards group, industry committee, board, or consortium
  • Specific experience with ontology and Knowledge Graph efforts
  • Experience in technical writing and documentation

Recommended Skills: Artificial Intelligence, Automation, Avro, Biology, Biomedical Sciences, Business Requirements
Estimated Salary: $20 to $28 per hour based on qualifications.
