Hi, I am running a TF inference task on my cluster, but I find it takes a long time to get a response, because it is a BERT model and I run it on a CPU machine. My company has a GPU k8s cluster, and I have read the document https://ci.apache.org/projects/flink/flink-docs-master/docs/deployment/advanced/external_resources/ — could you give me a demo, covering both TF inference on GPU and training on GPU? I also use Alink in some of my tasks; is there a demo for Alink on GPU?
Hi, there is currently an MNIST inference demo [1] that uses GPU, but it does not use TensorFlow. The flink-ai-extended project has an example of training MNIST with TensorFlow [2], but I am not sure whether it can run directly with the GPU build of TF. I will involve Becket and Wei to confirm.
[1] https://github.com/KarmaGYZ/flink-mnist
[2] https://github.com/alibaba/flink-ai-extended/tree/master/deep-learning-on-flink/flink-ml-examples/src/main/java/com/alibaba/flink/ml/examples/tensorflow/mnist
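For the GPU side of the question, the external-resources document linked above describes a configuration-driven approach. The following is a minimal sketch of a `flink-conf.yaml` fragment enabling the built-in GPU driver; the resource amount and the Kubernetes device-plugin resource name (`nvidia.com/gpu` here) are assumptions you would adapt to your own cluster:

```yaml
# flink-conf.yaml — enable the GPU external resource framework
# (sketch based on the Flink external-resources docs; adjust to your cluster)
external-resources: gpu
# number of GPUs to request per TaskManager
external-resource.gpu.amount: 1
# Flink's built-in GPU driver
external-resource.gpu.driver-factory.class: org.apache.flink.externalresource.gpu.GPUDriverFactory
# on Kubernetes, map the resource to the device plugin's resource name
external-resource.gpu.kubernetes.config-key: nvidia.com/gpu
```

Inside an operator, the assigned device indices can then be read via `getRuntimeContext().getExternalResourceInfos("gpu")`, which is how the flink-mnist demo [1] pins inference to a specific GPU.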
Copyright notice: the content of this post was contributed by real-name registered users of Alibaba Cloud; copyright belongs to the original authors. The Alibaba Cloud Developer Community does not own the copyright and assumes no corresponding legal liability. For details, see the Alibaba Cloud Developer Community User Service Agreement and the Intellectual Property Protection Guidelines. If you find suspected plagiarism in this community, file an infringement complaint form to report it; once verified, the community will immediately delete the allegedly infringing content.