
[AI] Detailed record of the PCT part_seg model

Below is the full module hierarchy of the part-segmentation model, as produced by printing the instantiated PyTorch model (print(model)).

PointTransformerSeg(
  (backbone): Backbone(
    (fc1): Sequential(
      (0): Linear(in_features=19, out_features=32, bias=True)
      (1): ReLU()
      (2): Linear(in_features=32, out_features=32, bias=True)
    )
    (transformer1): TransformerBlock(
      (fc1): Linear(in_features=32, out_features=512, bias=True)
      (fc2): Linear(in_features=512, out_features=32, bias=True)
      (fc_delta): Sequential(
        (0): Linear(in_features=3, out_features=512, bias=True)
        (1): ReLU()
        (2): Linear(in_features=512, out_features=512, bias=True)
      )
      (fc_gamma): Sequential(
        (0): Linear(in_features=512, out_features=512, bias=True)
        (1): ReLU()
        (2): Linear(in_features=512, out_features=512, bias=True)
      )
      (w_qs): Linear(in_features=512, out_features=512, bias=False)
      (w_ks): Linear(in_features=512, out_features=512, bias=False)
      (w_vs): Linear(in_features=512, out_features=512, bias=False)
    )
    (transition_downs): ModuleList(
      (0): TransitionDown(
        (sa): PointNetSetAbstraction(
          (mlp_convs): ModuleList(
            (0): Conv2d(35, 64, kernel_size=(1, 1), stride=(1, 1))
            (1): Conv2d(64, 64, kernel_size=(1, 1), stride=(1, 1))
          )
          (mlp_bns): ModuleList(
            (0): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
            (1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          )
        )
      )
      (1): TransitionDown(
        (sa): PointNetSetAbstraction(
          (mlp_convs): ModuleList(
            (0): Conv2d(67, 128, kernel_size=(1, 1), stride=(1, 1))
            (1): Conv2d(128, 128, kernel_size=(1, 1), stride=(1, 1))
          )
          (mlp_bns): ModuleList(
            (0): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
            (1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          )
        )
      )
      (2): TransitionDown(
        (sa): PointNetSetAbstraction(
          (mlp_convs): ModuleList(
            (0): Conv2d(131, 256, kernel_size=(1, 1), stride=(1, 1))
            (1): Conv2d(256, 256, kernel_size=(1, 1), stride=(1, 1))
          )
          (mlp_bns): ModuleList(
            (0): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
            (1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          )
        )
      )
      (3): TransitionDown(
        (sa): PointNetSetAbstraction(
          (mlp_convs): ModuleList(
            (0): Conv2d(259, 512, kernel_size=(1, 1), stride=(1, 1))
            (1): Conv2d(512, 512, kernel_size=(1, 1), stride=(1, 1))
          )
          (mlp_bns): ModuleList(
            (0): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
            (1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          )
        )
      )
    )
    (transformers): ModuleList(
      (0): TransformerBlock(
        (fc1): Linear(in_features=64, out_features=512, bias=True)
        (fc2): Linear(in_features=512, out_features=64, bias=True)
        (fc_delta): Sequential(
          (0): Linear(in_features=3, out_features=512, bias=True)
          (1): ReLU()
          (2): Linear(in_features=512, out_features=512, bias=True)
        )
        (fc_gamma): Sequential(
          (0): Linear(in_features=512, out_features=512, bias=True)
          (1): ReLU()
          (2): Linear(in_features=512, out_features=512, bias=True)
        )
        (w_qs): Linear(in_features=512, out_features=512, bias=False)
        (w_ks): Linear(in_features=512, out_features=512, bias=False)
        (w_vs): Linear(in_features=512, out_features=512, bias=False)
      )
      (1): TransformerBlock(
        (fc1): Linear(in_features=128, out_features=512, bias=True)
        (fc2): Linear(in_features=512, out_features=128, bias=True)
        (fc_delta): Sequential(
          (0): Linear(in_features=3, out_features=512, bias=True)
          (1): ReLU()
          (2): Linear(in_features=512, out_features=512, bias=True)
        )
        (fc_gamma): Sequential(
          (0): Linear(in_features=512, out_features=512, bias=True)
          (1): ReLU()
          (2): Linear(in_features=512, out_features=512, bias=True)
        )
        (w_qs): Linear(in_features=512, out_features=512, bias=False)
        (w_ks): Linear(in_features=512, out_features=512, bias=False)
        (w_vs): Linear(in_features=512, out_features=512, bias=False)
      )
      (2): TransformerBlock(
        (fc1): Linear(in_features=256, out_features=512, bias=True)
        (fc2): Linear(in_features=512, out_features=256, bias=True)
        (fc_delta): Sequential(
          (0): Linear(in_features=3, out_features=512, bias=True)
          (1): ReLU()
          (2): Linear(in_features=512, out_features=512, bias=True)
        )
        (fc_gamma): Sequential(
          (0): Linear(in_features=512, out_features=512, bias=True)
          (1): ReLU()
          (2): Linear(in_features=512, out_features=512, bias=True)
        )
        (w_qs): Linear(in_features=512, out_features=512, bias=False)
        (w_ks): Linear(in_features=512, out_features=512, bias=False)
        (w_vs): Linear(in_features=512, out_features=512, bias=False)
      )
      (3): TransformerBlock(
        (fc1): Linear(in_features=512, out_features=512, bias=True)
        (fc2): Linear(in_features=512, out_features=512, bias=True)
        (fc_delta): Sequential(
          (0): Linear(in_features=3, out_features=512, bias=True)
          (1): ReLU()
          (2): Linear(in_features=512, out_features=512, bias=True)
        )
        (fc_gamma): Sequential(
          (0): Linear(in_features=512, out_features=512, bias=True)
          (1): ReLU()
          (2): Linear(in_features=512, out_features=512, bias=True)
        )
        (w_qs): Linear(in_features=512, out_features=512, bias=False)
        (w_ks): Linear(in_features=512, out_features=512, bias=False)
        (w_vs): Linear(in_features=512, out_features=512, bias=False)
      )
    )
  )
  (fc2): Sequential(
    (0): Linear(in_features=512, out_features=512, bias=True)
    (1): ReLU()
    (2): Linear(in_features=512, out_features=512, bias=True)
    (3): ReLU()
    (4): Linear(in_features=512, out_features=512, bias=True)
  )
  (transformer2): TransformerBlock(
    (fc1): Linear(in_features=512, out_features=512, bias=True)
    (fc2): Linear(in_features=512, out_features=512, bias=True)
    (fc_delta): Sequential(
      (0): Linear(in_features=3, out_features=512, bias=True)
      (1): ReLU()
      (2): Linear(in_features=512, out_features=512, bias=True)
    )
    (fc_gamma): Sequential(
      (0): Linear(in_features=512, out_features=512, bias=True)
      (1): ReLU()
      (2): Linear(in_features=512, out_features=512, bias=True)
    )
    (w_qs): Linear(in_features=512, out_features=512, bias=False)
    (w_ks): Linear(in_features=512, out_features=512, bias=False)
    (w_vs): Linear(in_features=512, out_features=512, bias=False)
  )
  (transition_ups): ModuleList(
    (0): TransitionUp(
      (fc1): Sequential(
        (0): Linear(in_features=512, out_features=256, bias=True)
        (1): SwapAxes()
        (2): BatchNorm1d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (3): SwapAxes()
        (4): ReLU()
      )
      (fc2): Sequential(
        (0): Linear(in_features=256, out_features=256, bias=True)
        (1): SwapAxes()
        (2): BatchNorm1d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (3): SwapAxes()
        (4): ReLU()
      )
      (fp): PointNetFeaturePropagation(
        (mlp_convs): ModuleList()
        (mlp_bns): ModuleList()
      )
    )
    (1): TransitionUp(
      (fc1): Sequential(
        (0): Linear(in_features=256, out_features=128, bias=True)
        (1): SwapAxes()
        (2): BatchNorm1d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (3): SwapAxes()
        (4): ReLU()
      )
      (fc2): Sequential(
        (0): Linear(in_features=128, out_features=128, bias=True)
        (1): SwapAxes()
        (2): BatchNorm1d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (3): SwapAxes()
        (4): ReLU()
      )
      (fp): PointNetFeaturePropagation(
        (mlp_convs): ModuleList()
        (mlp_bns): ModuleList()
      )
    )
    (2): TransitionUp(
      (fc1): Sequential(
        (0): Linear(in_features=128, out_features=64, bias=True)
        (1): SwapAxes()
        (2): BatchNorm1d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (3): SwapAxes()
        (4): ReLU()
      )
      (fc2): Sequential(
        (0): Linear(in_features=64, out_features=64, bias=True)
        (1): SwapAxes()
        (2): BatchNorm1d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (3): SwapAxes()
        (4): ReLU()
      )
      (fp): PointNetFeaturePropagation(
        (mlp_convs): ModuleList()
        (mlp_bns): ModuleList()
      )
    )
    (3): TransitionUp(
      (fc1): Sequential(
        (0): Linear(in_features=64, out_features=32, bias=True)
        (1): SwapAxes()
        (2): BatchNorm1d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (3): SwapAxes()
        (4): ReLU()
      )
      (fc2): Sequential(
        (0): Linear(in_features=32, out_features=32, bias=True)
        (1): SwapAxes()
        (2): BatchNorm1d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (3): SwapAxes()
        (4): ReLU()
      )
      (fp): PointNetFeaturePropagation(
        (mlp_convs): ModuleList()
        (mlp_bns): ModuleList()
      )
    )
  )
  (transformers): ModuleList(
    (0): TransformerBlock(
      (fc1): Linear(in_features=256, out_features=512, bias=True)
      (fc2): Linear(in_features=512, out_features=256, bias=True)
      (fc_delta): Sequential(
        (0): Linear(in_features=3, out_features=512, bias=True)
        (1): ReLU()
        (2): Linear(in_features=512, out_features=512, bias=True)
      )
      (fc_gamma): Sequential(
        (0): Linear(in_features=512, out_features=512, bias=True)
        (1): ReLU()
        (2): Linear(in_features=512, out_features=512, bias=True)
      )
      (w_qs): Linear(in_features=512, out_features=512, bias=False)
      (w_ks): Linear(in_features=512, out_features=512, bias=False)
      (w_vs): Linear(in_features=512, out_features=512, bias=False)
    )
    (1): TransformerBlock(
      (fc1): Linear(in_features=128, out_features=512, bias=True)
      (fc2): Linear(in_features=512, out_features=128, bias=True)
      (fc_delta): Sequential(
        (0): Linear(in_features=3, out_features=512, bias=True)
        (1): ReLU()
        (2): Linear(in_features=512, out_features=512, bias=True)
      )
      (fc_gamma): Sequential(
        (0): Linear(in_features=512, out_features=512, bias=True)
        (1): ReLU()
        (2): Linear(in_features=512, out_features=512, bias=True)
      )
      (w_qs): Linear(in_features=512, out_features=512, bias=False)
      (w_ks): Linear(in_features=512, out_features=512, bias=False)
      (w_vs): Linear(in_features=512, out_features=512, bias=False)
    )
    (2): TransformerBlock(
      (fc1): Linear(in_features=64, out_features=512, bias=True)
      (fc2): Linear(in_features=512, out_features=64, bias=True)
      (fc_delta): Sequential(
        (0): Linear(in_features=3, out_features=512, bias=True)
        (1): ReLU()
        (2): Linear(in_features=512, out_features=512, bias=True)
      )
      (fc_gamma): Sequential(
        (0): Linear(in_features=512, out_features=512, bias=True)
        (1): ReLU()
        (2): Linear(in_features=512, out_features=512, bias=True)
      )
      (w_qs): Linear(in_features=512, out_features=512, bias=False)
      (w_ks): Linear(in_features=512, out_features=512, bias=False)
      (w_vs): Linear(in_features=512, out_features=512, bias=False)
    )
    (3): TransformerBlock(
      (fc1): Linear(in_features=32, out_features=512, bias=True)
      (fc2): Linear(in_features=512, out_features=32, bias=True)
      (fc_delta): Sequential(
        (0): Linear(in_features=3, out_features=512, bias=True)
        (1): ReLU()
        (2): Linear(in_features=512, out_features=512, bias=True)
      )
      (fc_gamma): Sequential(
        (0): Linear(in_features=512, out_features=512, bias=True)
        (1): ReLU()
        (2): Linear(in_features=512, out_features=512, bias=True)
      )
      (w_qs): Linear(in_features=512, out_features=512, bias=False)
      (w_ks): Linear(in_features=512, out_features=512, bias=False)
      (w_vs): Linear(in_features=512, out_features=512, bias=False)
    )
  )
  (fc3): Sequential(
    (0): Linear(in_features=32, out_features=64, bias=True)
    (1): ReLU()
    (2): Linear(in_features=64, out_features=64, bias=True)
    (3): ReLU()
    (4): Linear(in_features=64, out_features=50, bias=True)
  )
)
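Every TransformerBlock in the dump shares the same shape: fc1/fc2 project the point features to and from a 512-dimensional working width, fc_delta encodes 3-D relative positions, fc_gamma produces attention weights, and w_qs/w_ks/w_vs are the bias-free query/key/value projections. A minimal sketch of that layout, assuming d_model=512 and the submodule names shown above (only the module structure is reproduced, not the author's forward pass):

```python
import torch.nn as nn


class TransformerBlock(nn.Module):
    """Structural sketch of one printed TransformerBlock (layout only)."""

    def __init__(self, d_points, d_model=512):
        super().__init__()
        # project point features into / out of the d_model working width
        self.fc1 = nn.Linear(d_points, d_model)
        self.fc2 = nn.Linear(d_model, d_points)
        # position encoding of 3-D relative coordinates
        self.fc_delta = nn.Sequential(
            nn.Linear(3, d_model), nn.ReLU(), nn.Linear(d_model, d_model))
        # MLP producing the attention weights
        self.fc_gamma = nn.Sequential(
            nn.Linear(d_model, d_model), nn.ReLU(), nn.Linear(d_model, d_model))
        # bias-free query/key/value projections, as in the dump
        self.w_qs = nn.Linear(d_model, d_model, bias=False)
        self.w_ks = nn.Linear(d_model, d_model, bias=False)
        self.w_vs = nn.Linear(d_model, d_model, bias=False)
```

Instantiating this with d_points = 32 and calling print() on it reproduces the transformer1 entry of the dump.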
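Because the dump lists every Linear layer's in/out features, per-layer parameter counts can be read off directly: a Linear(in, out) layer holds in*out weights plus out bias terms when bias=True. A small helper (hypothetical, not part of the original code) illustrates the arithmetic:

```python
def linear_params(in_features, out_features, bias=True):
    """Parameter count of a single nn.Linear: weight matrix + optional bias."""
    return in_features * out_features + (out_features if bias else 0)


# backbone fc1: Linear(19, 32) followed by Linear(32, 32)
fc1_total = linear_params(19, 32) + linear_params(32, 32)

# one attention projection, e.g. w_qs: Linear(512, 512, bias=False)
wqs_total = linear_params(512, 512, bias=False)
```

For example, Linear(19, 32) contributes 19*32 + 32 = 640 parameters, and each bias-free 512x512 projection contributes 512*512 = 262144.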

Posted: 2021-11-19 17:37:57  Updated: 2021-11-19 17:38:48