This project is a simple web server built with the built-in `http` module in Node.js. The server handles GET and POST requests, serves static files, and exposes a simple REST API.
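As a rough illustration only, here is a minimal sketch of such a server using only Node's built-in modules. The port, the `public/` static directory, and the `/api/items` route are assumptions made for this example, not the project's actual layout.

```ts
// Minimal sketch: GET/POST handling, static files, and a toy REST endpoint.
// The port, static directory, and /api/items route are illustrative assumptions.
import * as http from "http";
import * as fs from "fs";
import * as path from "path";

const PORT = 3000;                                   // assumed port
const STATIC_DIR = path.join(__dirname, "public");   // assumed static-file directory
const items: string[] = [];                          // in-memory store for the toy API

const server = http.createServer((req, res) => {
  const url = req.url ?? "/";

  // Toy REST API: GET lists stored items, POST appends the request body.
  if (url === "/api/items") {
    if (req.method === "GET") {
      res.writeHead(200, { "Content-Type": "application/json" });
      res.end(JSON.stringify(items));
      return;
    }
    if (req.method === "POST") {
      let body = "";
      req.on("data", (chunk: Buffer) => (body += chunk.toString()));
      req.on("end", () => {
        items.push(body);
        res.writeHead(201, { "Content-Type": "application/json" });
        res.end(JSON.stringify({ added: body }));
      });
      return;
    }
  }

  // Static files: map the URL path onto the static directory.
  const filePath = path.join(STATIC_DIR, url === "/" ? "index.html" : url);
  fs.readFile(filePath, (err, data) => {
    if (err) {
      res.writeHead(404);
      res.end("Not found");
    } else {
      res.writeHead(200);
      res.end(data);
    }
  });
});

server.listen(PORT, () => console.log(`Listening on http://localhost:${PORT}`));
```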
- Author: Ricardo Barona
- Date: 2024-05-01
- Version: 1.0.0
- Status: Completed
- Topic: Web Server
- [Paper] [Appendix] ESM - Biological structure and function emerge from scaling unsupervised learning to 250 million protein sequences: https://www.pnas.org/doi/full/10.1073/pnas.2016239118
- [Paper] [Supplementary] ESMFold - Evolutionary-scale prediction of atomic-level protein structure with a language model: https://www.science.org/doi/10.1126/science.ade2574
- Evolutionary Scale Modeling repo: https://github.com/facebookresearch/esm
- [Paper] Mutation effects predicted from sequence co-variation: https://www.nature.com/articles/nbt.3769
- [Paper] Mega-scale experimental analysis of protein folding stability in biology and design: https://www.nature.com/articles/s41586-023-06328-6
- [Paper] Language models enable zero-shot prediction of the effects of mutations on protein function: https://openreview.net/pdf?id=uXc42E9ZPFs
- [Paper] Transformer protein language models are unsupervised structure learners: https://openreview.net/pdf?id=fylclEqgvgd
- [Paper] xTrimoPGLM: Unified 100B-Scale Pre-trained Transformer for Deciphering the Language of Protein: https://www.biorxiv.org/content/10.1101/2023.07.05.547496v1.full.pdf
- [Paper] AlphaFold2: https://www.nature.com/articles/s41586-021-03819-2
- [Paper] AlphaFold 1: https://www.nature.com/articles/s41586-019-1923-7
- AlphaFold: https://alphafold.ebi.ac.uk/
- AlphaFold2 Importance: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8592092/
- [Paper] RoseTTAFold: https://www.ipd.uw.edu/2021/07/rosettafold-accurate-protein-structure-prediction-accessible-to-all/
- [Paper] Mixtral of Experts: https://arxiv.org/abs/2401.04088
- [Paper] Mixture-of-Experts Meets Instruction Tuning: https://arxiv.org/abs/2305.14705
- @lucidrains GitHub mixture-of-experts: https://github.com/lucidrains/mixture-of-experts
- Mixtral 8x7B Hugging Face: https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1/tree/main
- Mixtral 8x7B MoE announcement: https://mistral.ai/news/mixtral-of-experts/
- Mistral MoE guardrailing: https://docs.mistral.ai/platform/guardrailing/
- MoE overview: https://www.tensorops.ai/post/what-is-mixture-of-experts-llm
- MoE guide: https://deepgram.com/learn/mixture-of-experts-ml-model-guide
- The Shift from Models to Compound AI Systems: https://bair.berkeley.edu/blog/2024/02/18/compound-ai-systems/
- [Paper] Outrageously Large Neural Networks: The Sparsely-Gated Mixture-of-Experts Layer: https://arxiv.org/abs/1701.06538
- [Paper] Mixture-of-Experts with Expert Choice Routing: https://arxiv.org/abs/2202.09368
- [Paper] The Era of 1-bit LLMs: All Large Language Models are in 1.58 Bits: https://arxiv.org/abs/2402.17764