We develop Large Language Models tailored to accelerate high-performance computing (HPC) workflows. Our research focuses on code parallelization, GPU code generation, and performance optimization using AI-based methods. We collaborate with leading partners to integrate LLMs into real-world HPC environments.
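As an illustration of LLM-assisted code parallelization, the sketch below prompts a causal code model to add an OpenMP pragma to a serial C loop. It assumes a checkpoint served through the Hugging Face transformers library; the model name is a placeholder, not one of our released models.

```python
# Minimal sketch of LLM-driven loop parallelization via transformers.
# "my-org/hpc-code-llm" is a hypothetical placeholder checkpoint.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL = "my-org/hpc-code-llm"  # placeholder: any code-generation checkpoint

tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForCausalLM.from_pretrained(MODEL)

serial_loop = """\
for (int i = 0; i < n; ++i) {
    y[i] = a * x[i] + y[i];
}
"""

prompt = (
    "Rewrite the following C loop with an OpenMP pragma so it runs in parallel:\n"
    + serial_loop
)

inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```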
We apply fine-tuned Large Language Models to advance scientific computing tasks in biology, such as RNA editing prediction and sequence analysis. In addition, we develop attention networks tailored to specific biological problems, enhancing interpretability and performance. Our work integrates domain-specific knowledge with generative AI techniques, providing novel tools for biological discovery and computational biology research.
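To make the attention-network idea concrete, here is a toy PyTorch classifier over one-hot-encoded RNA windows whose attention weights can be inspected per position. The architecture, sizes, and the editing-site framing are illustrative assumptions, not our production model.

```python
# Toy attention-based classifier for RNA sequence windows (illustrative only).
import torch
import torch.nn as nn

class SeqAttentionClassifier(nn.Module):
    def __init__(self, n_bases=4, d_model=32, n_heads=4):
        super().__init__()
        self.embed = nn.Linear(n_bases, d_model)        # per-position embedding
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.head = nn.Linear(d_model, 1)               # e.g., editing-site probability

    def forward(self, x):                               # x: (batch, length, 4)
        h = self.embed(x)
        h, weights = self.attn(h, h, h)                 # weights support interpretability
        return torch.sigmoid(self.head(h.mean(dim=1))), weights

model = SeqAttentionClassifier()
x = torch.nn.functional.one_hot(torch.randint(0, 4, (8, 101)), 4).float()
prob, attn = model(x)
print(prob.shape, attn.shape)  # (8, 1) predictions, (8, 101, 101) attention maps
```

Returning the attention maps alongside the prediction is what lets downstream analysis ask which sequence positions drove a given call.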
Our research harnesses computer vision models for scientific computing, primarily in physics and materials science. We develop novel methods for texture segmentation, structure analysis, and predictive modeling of physical systems, with applications in material characterization, experimental physics, and broader scientific domains.
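As a minimal texture-segmentation sketch, the example below clusters per-pixel Gabor filter responses with k-means. The stock test image, the four-orientation filter bank, and k=3 are assumed for illustration and do not reflect a specific published pipeline of ours.

```python
# Texture segmentation sketch: k-means over Gabor responses (illustrative).
import numpy as np
from skimage import data, filters
from sklearn.cluster import KMeans

image = data.camera() / 255.0               # stand-in for a micrograph

# Small feature stack from Gabor responses at several orientations.
responses = []
for theta in (0, np.pi / 4, np.pi / 2, 3 * np.pi / 4):
    real, _ = filters.gabor(image, frequency=0.2, theta=theta)
    responses.append(real)

feats = np.stack(responses, axis=-1)        # (H, W, n_filters)
X = feats.reshape(-1, feats.shape[-1])

labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
segmentation = labels.reshape(image.shape)  # per-pixel texture classes
print(segmentation.shape)
```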