Become Iron Man! All You Need Is an RTX 3090: Microsoft Open-Sources the J.A.R.V.I.S. AI Assistant System

A dream comes true: Microsoft, true to form, has open-sourced J.A.R.V.I.S., an AI assistant system. Jarvis is short for Just A Rather Very Intelligent System. In the films it helps Iron Man, Tony Stark, with all manner of tasks and challenges: controlling and managing his armor, providing real-time intelligence and data analysis, helping him make decisions, and more.

Now we can each have a Jarvis AI assistant of our own, and the price of admission is a single RTX 3090 graphics card.

Configuring the Jarvis Environment

Generally speaking, the mainstream entry-level cards for deep learning are the 2070 and the 3070, while the 3090 is about the ceiling among consumer-grade deep-learning GPUs.

Above that you are into industrial-grade A-series and V-series cards. VRAM is the hard requirement here, because the large models have to be loaded locally; you can edit the code to trim down what gets loaded, but some functionality will inevitably be lost. If you don't have a 3090, two 12 GB 3060s in parallel can hit the VRAM target, but their compute and overall performance still won't match a single 3090.
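
Before cloning anything, it's worth a quick pre-flight check of how much VRAM you actually have. Below is a minimal sketch using PyTorch (assuming a CUDA build of torch is installed; the 20 GB threshold is my own illustrative rule of thumb, not an official requirement):

import torch

# Rough pre-flight check: Jarvis loads several large models onto one GPU,
# so total VRAM is the first thing to verify.
if not torch.cuda.is_available():
    raise SystemExit("No CUDA device found; local inference will not work.")

for i in range(torch.cuda.device_count()):
    props = torch.cuda.get_device_properties(i)
    total_gb = props.total_memory / 1024**3
    print(f"cuda:{i} {props.name}: {total_gb:.1f} GB VRAM")
    if total_gb < 20:  # illustrative threshold, roughly 3090-class
        print("  -> likely too small for the full local deployment")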

Once you've confirmed your local hardware is up to running Jarvis, the usual first step: clone the project:

git clone https://github.com/microsoft/JARVIS.git

Then change into the project directory:

cd JARVIS

Edit the project's configuration file, server/config.yaml:

openai:  
  key: your_personal_key # gradio, your_personal_key  
huggingface:  
  cookie: # required for huggingface inference  
local: # ignore: just for development  
  endpoint: http://localhost:8003  
dev: false  
debug: false  
log_file: logs/debug.log  
model: text-davinci-003 # text-davinci-003  
use_completion: true  
inference_mode: hybrid # local, huggingface or hybrid  
local_deployment: minimal # no, minimal, standard or full  
num_candidate_models: 5  
max_description_length: 100  
proxy:   
httpserver:  
  host: localhost  
  port: 8004  
modelserver:  
  host: localhost  
  port: 8005  
logit_bias:  
  parse_task: 0.1  
  choose_model: 5

Only three settings really need changing here: the OpenAI key, the cookie token from the Hugging Face website, and the OpenAI model; the default model is text-davinci-003.
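
A forgotten placeholder is easy to catch by loading the file before starting anything. Here is a quick sanity-check sketch (assuming PyYAML is installed; the field names follow the config shown above):

import yaml

# Sanity-check server/config.yaml before launching any services.
with open("server/config.yaml") as f:
    cfg = yaml.safe_load(f)

assert cfg["openai"]["key"] not in (None, "your_personal_key"), \
    "openai.key is still the placeholder"
assert cfg["huggingface"]["cookie"], "huggingface.cookie is empty"
print("model:", cfg["model"], "| inference_mode:", cfg["inference_mode"])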

With the config done, the official recommendation is a conda virtual environment with Python 3.8. Personally, I see no need for a virtual environment here at all; Python 3.10 directly works fine. Next, install the dependencies:

pip3 install -r requirements.txt

The project's dependencies are as follows:

git+https://github.com/huggingface/diffusers.git@8c530fc2f6a76a2aefb6b285dce6df1675092ac6#egg=diffusers  
git+https://github.com/huggingface/transformers@c612628045822f909020f7eb6784c79700813eda#egg=transformers  
git+https://github.com/patrickvonplaten/controlnet_aux@78efc716868a7f5669c288233d65b471f542ce40#egg=controlnet_aux  
tiktoken==0.3.3  
pydub==0.25.1  
espnet==202301  
espnet_model_zoo==0.1.7  
flask==2.2.3  
flask_cors==3.0.10  
waitress==2.1.2  
datasets==2.11.0  
asteroid==0.6.0  
speechbrain==0.5.14  
timm==0.6.13  
typeguard==2.13.3  
accelerate==0.18.0  
pytesseract==0.3.10  
gradio==3.24.1

The web API here is built on the relatively new Flask 2.2, yet oddly Microsoft doesn't use the async support that recent Flask releases offer.
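
For reference, Flask 2.x can register coroutine views directly (it needs the flask[async] extra, i.e. asgiref). A minimal sketch of what that looks like; this is purely illustrative and not part of the JARVIS codebase:

import asyncio
from flask import Flask

app = Flask(__name__)

# Flask >= 2.0 accepts `async def` view functions out of the box
# once the flask[async] extra is installed.
@app.route("/ping")
async def ping():
    await asyncio.sleep(0.1)  # stand-in for real async I/O
    return {"status": "ok"}

if __name__ == "__main__":
    app.run(port=5000)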

Once installation completes, enter the models directory:

cd models

Download the models and datasets:

sh download.sh

Brace yourself here: the models alone eat an enormous amount of disk space, to say nothing of the datasets. Every file comes from Hugging Face:

models="  
nlpconnect/vit-gpt2-image-captioning  
lllyasviel/ControlNet  
runwayml/stable-diffusion-v1-5  
CompVis/stable-diffusion-v1-4  
stabilityai/stable-diffusion-2-1  
Salesforce/blip-image-captioning-large  
damo-vilab/text-to-video-ms-1.7b  
microsoft/speecht5_asr  
facebook/maskformer-swin-large-ade  
microsoft/biogpt  
facebook/esm2_t12_35M_UR50D  
microsoft/trocr-base-printed  
microsoft/trocr-base-handwritten  
JorisCos/DCCRNet_Libri1Mix_enhsingle_16k  
espnet/kan-bayashi_ljspeech_vits  
facebook/detr-resnet-101  
microsoft/speecht5_tts  
microsoft/speecht5_hifigan  
microsoft/speecht5_vc  
facebook/timesformer-base-finetuned-k400  
runwayml/stable-diffusion-v1-5  
superb/wav2vec2-base-superb-ks  
openai/whisper-base  
Intel/dpt-large  
microsoft/beit-base-patch16-224-pt22k-ft22k  
facebook/detr-resnet-50-panoptic  
facebook/detr-resnet-50  
openai/clip-vit-large-patch14  
google/owlvit-base-patch32  
microsoft/DialoGPT-medium  
bert-base-uncased  
Jean-Baptiste/camembert-ner  
deepset/roberta-base-squad2  
facebook/bart-large-cnn  
google/tapas-base-finetuned-wtq  
distilbert-base-uncased-finetuned-sst-2-english  
gpt2  
mrm8488/t5-base-finetuned-question-generation-ap  
Jean-Baptiste/camembert-ner  
t5-base  
impira/layoutlm-document-qa  
ydshieh/vit-gpt2-coco-en  
dandelin/vilt-b32-finetuned-vqa  
lambdalabs/sd-image-variations-diffusers  
facebook/timesformer-base-finetuned-k400  
facebook/maskformer-swin-base-coco  
Intel/dpt-hybrid-midas  
lllyasviel/sd-controlnet-canny  
lllyasviel/sd-controlnet-depth  
lllyasviel/sd-controlnet-hed  
lllyasviel/sd-controlnet-mlsd  
lllyasviel/sd-controlnet-openpose  
lllyasviel/sd-controlnet-scribble  
lllyasviel/sd-controlnet-seg  
"  
  
# CURRENT_DIR=$(cd `dirname $0`; pwd)  
CURRENT_DIR=$(pwd)  
for model in $models;  
do  
    echo "----- Downloading from https://huggingface.co/"$model" -----"  
    if [ -d "$model" ]; then  
        # cd $model && git reset --hard && git pull && git lfs pull  
        cd $model && git pull && git lfs pull  
        cd $CURRENT_DIR  
    else  
        # git clone pulls the LFS files as well  
        git clone https://huggingface.co/$model $model  
    fi  
done  
  
datasets="Matthijs/cmu-arctic-xvectors"  
  
for dataset in $datasets;  
do  
    echo "----- Downloading from https://huggingface.co/datasets/"$dataset" -----"  
    if [ -d "$dataset" ]; then  
        cd $dataset && git pull && git lfs pull  
        cd $CURRENT_DIR  
    else  
        git clone https://huggingface.co/datasets/$dataset $dataset  
    fi  
done

You could also split this into two shell scripts and download with multiple processes in parallel, which is considerably faster; a sketch of the idea follows.
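
Instead of splitting the script by hand, the same effect can be had from Python with a thread pool, since cloning is I/O-bound. A sketch under those assumptions (git and git-lfs installed; the model list here is an abbreviated stand-in for the full list in download.sh):

import subprocess
from concurrent.futures import ThreadPoolExecutor

# Abbreviated stand-in; substitute the full list from download.sh.
MODELS = [
    "nlpconnect/vit-gpt2-image-captioning",
    "Salesforce/blip-image-captioning-large",
    "runwayml/stable-diffusion-v1-5",
]

def clone(repo: str) -> None:
    # git clone pulls the LFS objects too when git-lfs is installed.
    subprocess.run(
        ["git", "clone", f"https://huggingface.co/{repo}", repo],
        check=False,
    )

# Four workers as an example; tune to your bandwidth.
with ThreadPoolExecutor(max_workers=4) as pool:
    list(pool.map(clone, MODELS))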

But honestly: don't. The files really are enormous; this is not a toy an ordinary person can spin up. The good news is that Jarvis can also run without downloading the local models and datasets at all, as described below.
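
If you want hard numbers before committing the disk space, the Hugging Face Hub API can report per-file sizes. A hedged sketch (it assumes the huggingface_hub package; size metadata can be missing for some files, hence the `or 0`):

from huggingface_hub import HfApi

api = HfApi()
# Sample subset; loop over the full list from download.sh for the real total.
repos = ["runwayml/stable-diffusion-v1-5", "stabilityai/stable-diffusion-2-1"]

total = 0
for repo in repos:
    info = api.model_info(repo, files_metadata=True)
    size = sum(f.size or 0 for f in info.siblings)
    total += size
    print(f"{repo}: {size / 1024**3:.1f} GB")
print(f"subset total: {total / 1024**3:.1f} GB")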

Once the marathon download finally finishes, Jarvis is fully configured.

Running Jarvis

If you did choose to download all of the models and datasets (I salute your nerve), start the service in a terminal:

python models_server.py --config config.yaml

Per the config above, this brings up the model server on port 8005 (the modelserver entry). The /hugginggpt endpoint itself is exposed by the Flask HTTP server on port 8004, started separately with `python awesome_chat.py --config config.yaml --mode server`; with both running, a plain HTTP request is all it takes to drive Jarvis:

curl --location 'http://localhost:8004/hugginggpt' \  
--header 'Content-Type: application/json' \  
--data '{  
    "messages": [  
        {  
            "role": "user",  
            "content": "please generate a video based on \"Spiderman is surfing\""  
        }  
    ]  
}'

This asks Jarvis to generate a video of "Spiderman is surfing".
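
The same call from Python, for anyone scripting against the endpoint; this is a direct translation of the curl command above (assuming the requests package):

import requests

# Equivalent of the curl call: ask Jarvis to generate a video.
resp = requests.post(
    "http://localhost:8004/hugginggpt",
    json={
        "messages": [
            {
                "role": "user",
                "content": 'please generate a video based on "Spiderman is surfing"',
            }
        ]
    },
    timeout=600,  # multi-model pipelines can take a while
)
print(resp.json())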

Of course, on my hardware there's no way to run the full set, so the loaded models can be trimmed down. Around line 81 of models_server.py:

other_pipes = {  
            "nlpconnect/vit-gpt2-image-captioning":{  
                "model": VisionEncoderDecoderModel.from_pretrained(f"{local_fold}/nlpconnect/vit-gpt2-image-captioning"),  
                "feature_extractor": ViTImageProcessor.from_pretrained(f"{local_fold}/nlpconnect/vit-gpt2-image-captioning"),  
                "tokenizer": AutoTokenizer.from_pretrained(f"{local_fold}/nlpconnect/vit-gpt2-image-captioning"),  
                "device": "cuda:0"  
            },  
            "Salesforce/blip-image-captioning-large": {  
                "model": BlipForConditionalGeneration.from_pretrained(f"{local_fold}/Salesforce/blip-image-captioning-large"),  
                "processor": BlipProcessor.from_pretrained(f"{local_fold}/Salesforce/blip-image-captioning-large"),  
                "device": "cuda:0"  
            },  
            "damo-vilab/text-to-video-ms-1.7b": {  
                "model": DiffusionPipeline.from_pretrained(f"{local_fold}/damo-vilab/text-to-video-ms-1.7b", torch_dtype=torch.float16, variant="fp16"),  
                "device": "cuda:0"  
            },  
            "facebook/maskformer-swin-large-ade": {  
                "model": MaskFormerForInstanceSegmentation.from_pretrained(f"{local_fold}/facebook/maskformer-swin-large-ade"),  
                "feature_extractor" : AutoFeatureExtractor.from_pretrained("facebook/maskformer-swin-large-ade"),  
                "device": "cuda:0"  
            },  
            "microsoft/trocr-base-printed": {  
                "processor": TrOCRProcessor.from_pretrained(f"{local_fold}/microsoft/trocr-base-printed"),  
                "model": VisionEncoderDecoderModel.from_pretrained(f"{local_fold}/microsoft/trocr-base-printed"),  
                "device": "cuda:0"  
            },  
            "microsoft/trocr-base-handwritten": {  
                "processor": TrOCRProcessor.from_pretrained(f"{local_fold}/microsoft/trocr-base-handwritten"),  
                "model": VisionEncoderDecoderModel.from_pretrained(f"{local_fold}/microsoft/trocr-base-handwritten"),  
                "device": "cuda:0"  
            },  
            "JorisCos/DCCRNet_Libri1Mix_enhsingle_16k": {  
                "model": BaseModel.from_pretrained("JorisCos/DCCRNet_Libri1Mix_enhsingle_16k"),  
                "device": "cuda:0"  
            },  
            "espnet/kan-bayashi_ljspeech_vits": {  
                "model": Text2Speech.from_pretrained(f"espnet/kan-bayashi_ljspeech_vits"),  
                "device": "cuda:0"  
            },  
            "lambdalabs/sd-image-variations-diffusers": {  
                "model": DiffusionPipeline.from_pretrained(f"{local_fold}/lambdalabs/sd-image-variations-diffusers"), #torch_dtype=torch.float16  
                "device": "cuda:0"  
            },  
            "CompVis/stable-diffusion-v1-4": {  
                "model": DiffusionPipeline.from_pretrained(f"{local_fold}/CompVis/stable-diffusion-v1-4"),  
                "device": "cuda:0"  
            },  
            "stabilityai/stable-diffusion-2-1": {  
                "model": DiffusionPipeline.from_pretrained(f"{local_fold}/stabilityai/stable-diffusion-2-1"),  
                "device": "cuda:0"  
            },  
            "runwayml/stable-diffusion-v1-5": {  
                "model": DiffusionPipeline.from_pretrained(f"{local_fold}/runwayml/stable-diffusion-v1-5"),  
                "device": "cuda:0"  
            },  
            "microsoft/speecht5_tts":{  
                "processor": SpeechT5Processor.from_pretrained(f"{local_fold}/microsoft/speecht5_tts"),  
                "model": SpeechT5ForTextToSpeech.from_pretrained(f"{local_fold}/microsoft/speecht5_tts"),  
                "vocoder":  SpeechT5HifiGan.from_pretrained(f"{local_fold}/microsoft/speecht5_hifigan"),  
                "embeddings_dataset": load_dataset(f"{local_fold}/Matthijs/cmu-arctic-xvectors", split="validation"),  
                "device": "cuda:0"  
            },  
            "speechbrain/mtl-mimic-voicebank": {  
                "model": WaveformEnhancement.from_hparams(source="speechbrain/mtl-mimic-voicebank", savedir="models/mtl-mimic-voicebank"),  
                "device": "cuda:0"  
            },  
            "microsoft/speecht5_vc":{  
                "processor": SpeechT5Processor.from_pretrained(f"{local_fold}/microsoft/speecht5_vc"),  
                "model": SpeechT5ForSpeechToSpeech.from_pretrained(f"{local_fold}/microsoft/speecht5_vc"),  
                "vocoder": SpeechT5HifiGan.from_pretrained(f"{local_fold}/microsoft/speecht5_hifigan"),  
                "embeddings_dataset": load_dataset(f"{local_fold}/Matthijs/cmu-arctic-xvectors", split="validation"),  
                "device": "cuda:0"  
            },  
            "julien-c/wine-quality": {  
                "model": joblib.load(cached_download(hf_hub_url("julien-c/wine-quality", "sklearn_model.joblib")))  
            },  
            "facebook/timesformer-base-finetuned-k400": {  
                "processor": AutoImageProcessor.from_pretrained(f"{local_fold}/facebook/timesformer-base-finetuned-k400"),  
                "model": TimesformerForVideoClassification.from_pretrained(f"{local_fold}/facebook/timesformer-base-finetuned-k400"),  
                "device": "cuda:0"  
            },  
            "facebook/maskformer-swin-base-coco": {  
                "feature_extractor": MaskFormerFeatureExtractor.from_pretrained(f"{local_fold}/facebook/maskformer-swin-base-coco"),  
                "model": MaskFormerForInstanceSegmentation.from_pretrained(f"{local_fold}/facebook/maskformer-swin-base-coco"),  
                "device": "cuda:0"  
            },  
            "Intel/dpt-hybrid-midas": {  
                "model": DPTForDepthEstimation.from_pretrained(f"{local_fold}/Intel/dpt-hybrid-midas", low_cpu_mem_usage=True),  
                "feature_extractor": DPTFeatureExtractor.from_pretrained(f"{local_fold}/Intel/dpt-hybrid-midas"),  
                "device": "cuda:0"  
            }  
        }

Just comment out the models you don't need; a less invasive variant is sketched below.
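
One low-touch alternative to deleting entries is to gate each entry behind an allowlist, so that from_pretrained only ever runs for the models you keep. A sketch of the idea (KEEP is a hypothetical name of my own; candidates mirrors one entry of the dict above with the rest elided, and it assumes the imports and local_fold variable already present in models_server.py):

# Hypothetical allowlist: only the pipelines your card can actually hold.
KEEP = {
    "nlpconnect/vit-gpt2-image-captioning",
    "Salesforce/blip-image-captioning-large",
}

# Wrap each entry in a lambda so from_pretrained runs only for kept models;
# filtering the finished dict would have loaded everything into RAM first.
candidates = {
    "nlpconnect/vit-gpt2-image-captioning": lambda: {
        "model": VisionEncoderDecoderModel.from_pretrained(
            f"{local_fold}/nlpconnect/vit-gpt2-image-captioning"),
        "feature_extractor": ViTImageProcessor.from_pretrained(
            f"{local_fold}/nlpconnect/vit-gpt2-image-captioning"),
        "tokenizer": AutoTokenizer.from_pretrained(
            f"{local_fold}/nlpconnect/vit-gpt2-image-captioning"),
        "device": "cuda:0",
    },
    # ... remaining entries elided ...
}

other_pipes = {name: build() for name, build in candidates.items() if name in KEEP}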

Alternatively, you can bypass the local model server entirely by setting inference_mode: huggingface in config.yaml,

then run the ChatGPT-backed Jarvis straight from the terminal:

cd server  
python3 awesome_chat.py --config config.yaml --mode cli

What it looks like in action:

Welcome to Jarvis! A collaborative system that consists of an LLM as the controller and numerous expert models as collaborative executors. Jarvis can plan tasks, schedule Hugging Face models, generate friendly responses based on your requests, and help you with many things. Please enter your request (`exit` to exit).  
[ User ]: Given a collection of image A: /examples/cat.jpg, B: /examples/z3.jpg, C: /examples/z1.jpg, please tell me how many zebras in these picture?  
[ Jarvis ]: Based on your request, I have used several models to process the images you provided. For the image A: /examples/cat.jpg, I used the image-to-text model nlpconnect/vit-gpt2-image-captioning to generate the text "a cat sitting on a window sill looking out". Then I used the object-detection model facebook/detr-resnet-50 to generate the image with predicted box '/images/a3dc.jpg', which contains two objects with labels 'potted plant' and 'cat'. For the image B: /examples/z3.jpg, I used the image-to-text model nlpconnect/vit-gpt2-image-captioning to generate the text "two zebras standing in a field of dry grass". Then I used the object-detection model facebook/detr-resnet-50 to generate the image with predicted box '/images/f5df.jpg', which contains three objects with labels 'zebra'. For the image C: /examples/z1.jpg, I used the image-to-text model nlpconnect/vit-gpt2-image-captioning to generate the text "a zebra standing in a field of tall grass". Then I used the object-detection model facebook/detr-resnet-50 to generate the image with predicted box '/images/69c7.jpg', which contains one object with label 'zebra'. Finally, I used the canny-control model canny-control to generate the image '/images/7e18.png' based on the image /examples/savanna.jpg. Therefore, there are four zebras in these pictures. Is there anything else I can help you with?  
[ User ]: Please answer all the named entities in the sentence: Iron Man is a superhero appearing in American comic books published by Marvel Comics. The character was co-created by writer and editor Stan Lee, developed by scripter Larry Lieber, and designed by artists Don Heck and Jack Kirby.  
[ Jarvis ]: Sure, I understand your request. Based on the inference results of the models, I will answer your request step by step. Firstly, I used the token-classification model dslim/bert-base-NER to recognize the named entities in the sentence. The inference result is that the named entities are Iron Man, Stan Lee, Larry Lieber, Don Heck and Jack Kirby. Then, I used the question-answering model deepset/roberta-base-squad2 to answer your request. The inference result is that there is no output since the context cannot be empty. Therefore, I cannot make it. I hope this answer your request. Is there anything else I can help you with?  
[ User ]: Please generate a canny image based on /examples/savanna.jpg  
[ Jarvis ]: Sure. I have generated a canny image based on /examples/savanna.jpg. To do this, I first used the image-to-text model nlpconnect/vit-gpt2-image-captioning to generate the text description of the image, which is "a herd of giraffes and zebras grazing in a field". Then I used the canny-control model to generate the canny image of the original image, which is stored in the path "/images/1ea2.png". Finally, I used the canny-text-to-image model lllyasviel/sd-controlnet-canny to generate the canny image based on the text description and the canny image, which is stored in the path "/images/ba66.png". Do you need anything else?

You can think of it as ChatGPT-driven text-to-image generation wrapped in a Jarvis shell; as a piece of theater, full marks.

Conclusion

All in all, much like Microsoft's earlier open-sourced Visual ChatGPT, the symbolic value of a project like this far outweighs its practical value. Jarvis embodies a vision most of us in the field share, and the direction of this kind of AI deserves recognition; but with the hardware bar set this high, don't expect too much from it in the short term.
