(EN) The invention discloses an FPGA-based neural network operation method, device, equipment, and storage medium. The method comprises the following steps: acquiring a neural network model; determining the on-chip memory capacity of each of a plurality of FPGAs; splitting the neural network model into sub-models of corresponding data volumes according to the on-chip memory capacity of each FPGA, wherein the data volume of each sub-model does not exceed the on-chip memory capacity of the corresponding FPGA; distributing the sub-models to the on-chip memories of the corresponding FPGAs; and setting the data flow direction among the corresponding FPGAs according to the execution order of the sub-models, and controlling the FPGAs in that order to execute the neural network operation based on their respective sub-models. This ensures the overall efficiency of performing inference operations of the neural network model on the FPGAs. In addition, the invention further provides an FPGA-based neural network operation device, equipment, and storage medium, whose beneficial effects are the same as described above.
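The following is a minimal sketch, not the patented implementation, illustrating the two core steps described in the abstract: packing consecutive layers into sub-models whose data volume fits each FPGA's on-chip memory capacity, and deriving the data flow direction from the execution order of the sub-models. All names (Layer, SubModel, split_model, data_flow) and the greedy packing strategy are illustrative assumptions, not taken from the disclosure.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

# Hypothetical data structures: a layer with its on-chip data requirement,
# and a sub-model assigned to one FPGA.

@dataclass
class Layer:
    name: str
    data_volume: int  # bytes of weights/activations the layer needs on-chip

@dataclass
class SubModel:
    fpga_id: int
    layers: List[Layer] = field(default_factory=list)

    @property
    def data_volume(self) -> int:
        return sum(layer.data_volume for layer in self.layers)

def split_model(layers: List[Layer], fpga_capacities: List[int]) -> List[SubModel]:
    """Greedily pack consecutive layers into one sub-model per FPGA so that
    no sub-model exceeds that FPGA's on-chip memory capacity (assumed strategy)."""
    sub_models: List[SubModel] = []
    idx = 0
    for fpga_id, capacity in enumerate(fpga_capacities):
        sub = SubModel(fpga_id=fpga_id)
        while idx < len(layers) and sub.data_volume + layers[idx].data_volume <= capacity:
            sub.layers.append(layers[idx])
            idx += 1
        sub_models.append(sub)
    if idx < len(layers):
        raise ValueError("Combined on-chip memory is too small for the model")
    return sub_models

def data_flow(sub_models: List[SubModel]) -> List[Tuple[int, int]]:
    """Data flow direction follows the execution order: each FPGA forwards
    its intermediate result to the FPGA holding the next sub-model."""
    return [(sub_models[i].fpga_id, sub_models[i + 1].fpga_id)
            for i in range(len(sub_models) - 1)]

if __name__ == "__main__":
    layers = [Layer("conv1", 4_000_000), Layer("conv2", 6_000_000),
              Layer("fc1", 8_000_000), Layer("fc2", 2_000_000)]
    capacities = [12_000_000, 10_000_000]  # per-FPGA on-chip memory, in bytes
    subs = split_model(layers, capacities)
    print([[l.name for l in s.layers] for s in subs])  # [['conv1', 'conv2'], ['fc1', 'fc2']]
    print(data_flow(subs))                             # [(0, 1)]
```

In this sketch the partitioning is a simple greedy scan over consecutive layers; the disclosure only requires that each sub-model's data volume not exceed the corresponding FPGA's on-chip memory, so other packing strategies would satisfy the same constraint.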