A new era of self-intelligence networks: a large-model-driven future

At the Self-Intelligence Network Large Model Forum of the 2023 AI Network Innovation Conference held in Beijing, Lu Jianguo, senior architect of large models for ZTE's wireline products, delivered a keynote entitled "A New Era of Self-Intelligence Networks: A Large-Model-Driven Future". He shared ZTE's practice of applying large models to raise the intelligence level of self-intelligence networks by improving corpus quality through targeted fine-tuning of model capabilities and by automating the data cycle with digital twins.
Lu Jianguo said that key technologies such as AI enablement, digital twins, and intent-driven operation will support the advance of the self-intelligence network's intelligence level from L4 to L5, allowing it to keep iterating and evolving toward full autonomy. Among these key technologies, AI is the most important engine, and large models are the key element of AI technology.
On how to apply large models to the self-intelligence network, Lu Jianguo explained that large models have a powerful generation capability and can quickly produce a large number of candidate schemes. Self-intelligence network operations require executing long sequences of operational steps, which is equivalent to searching for an optimal solution in a high-dimensional space whose solution set covers all possible procedures. For such general NP (nondeterministic polynomial time) problems, a large model can sample extensively and then evaluate, optimize, and iterate, acting as an efficient pruner that rapidly approaches the optimal solution.
However, although large models can generate many schemes, it is difficult to guarantee that these schemes are useful. Even though large models have some reasoning ability, they still need human intervention when dealing with complex logic. To solve this problem, ZTE proposes integrating expert experience into the incremental pre-training and fine-tuning of the model to form a closed-loop iteration. In this way a smooth transition can be made from reinforcement learning from human feedback to reinforcement learning from tool feedback, which on the one hand makes effective use of the generation capability of large models and on the other hand ensures that the generated diagnostic schemes are accurate and reliable. A key link in this scheme is building an operations and maintenance knowledge graph through knowledge engineering: the data-flywheel scheme generation is grounded in this knowledge graph, which suppresses model hallucination and ensures that generated schemes are reliable and accurate. This knowledge-graph-based approach better combines expert experience with the model's generation capability to deliver more dependable solutions.
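To make the generate-evaluate-prune idea above concrete, the following is a minimal sketch, not ZTE's implementation: a stand-in for the large model proposes many candidate O&M plans, each plan is scored against an operations knowledge graph so hallucinated steps are penalized, and only the best survivors are kept and refreshed each round. All names here (generate_candidates, score_plan, the example fault type and steps) are hypothetical placeholders.

```python
import random

def generate_candidates(prompt: str, n: int) -> list[list[str]]:
    """Stand-in for a large-model call that returns n candidate plans,
    each plan being an ordered list of operation steps."""
    steps = ["collect_logs", "check_link", "restart_board", "rollback_config", "escalate"]
    return [random.sample(steps, k=3) for _ in range(n)]

def score_plan(plan: list[str], knowledge_graph: dict[str, set[str]]) -> float:
    """Reward steps that the knowledge graph marks as valid for the fault type,
    so irrelevant or hallucinated steps are pruned early."""
    valid = knowledge_graph.get("fiber_fault", set())
    return sum(1.0 for step in plan if step in valid) / len(plan)

def prune_loop(prompt, knowledge_graph, rounds=3, width=8, keep=2):
    """Sample widely, evaluate, prune, and resample: a crude approximation
    of iteratively approaching the optimal plan in a large solution space."""
    survivors = generate_candidates(prompt, width)
    for _ in range(rounds):
        survivors.sort(key=lambda p: score_plan(p, knowledge_graph), reverse=True)
        survivors = survivors[:keep]                             # prune low-scoring plans
        survivors += generate_candidates(prompt, width - keep)   # refill with new samples
    return max(survivors, key=lambda p: score_plan(p, knowledge_graph))

if __name__ == "__main__":
    kg = {"fiber_fault": {"collect_logs", "check_link", "rollback_config"}}
    print(prune_loop("diagnose fiber fault on OLT-3", kg))
```

In a real system the scoring step would query the actual O&M knowledge graph and the candidates would come from the fine-tuned model; the loop structure is the point of the sketch.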

On the application logic design for the large model, Lu Jianguo further explained that ZTE adopts a model-driven closed-loop method based on prompt engineering. The essence of the design is to take a structured expression of human language (a prompt template) as input, have the large model generate structured output (an orchestration scheme), and then execute that scheme interactively through the application framework. To realize this logic, ZTE is making technical preparations on many fronts: multi-modal capability evolution, corpus preparation, knowledge injection from a resource-relationship knowledge graph, an atomic-API corpus and atomic-API capability reserve, a manually built fault-simulation environment, a digital-twin automated fault-simulation environment, and tool preparation.
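The sketch below illustrates, under assumed interfaces, the prompt-engineering closed loop just described: a structured prompt template is filled in, sent to a large model, the model's structured (JSON) orchestration scheme is parsed, and an application framework executes it step by step through atomic APIs. call_large_model, ATOMIC_APIS, and the example alarm are hypothetical placeholders, not a real ZTE SDK.

```python
import json

PROMPT_TEMPLATE = (
    "You are a network O&M assistant.\n"
    "Alarm: {alarm}\n"
    "Known topology: {topology}\n"
    "Return a JSON object: {{\"steps\": [{{\"api\": ..., \"args\": {{...}}}}]}}"
)

ATOMIC_APIS = {  # atomic-API reserve: each entry wraps one management-plane action
    "query_port_status": lambda args: {"port": args["port"], "status": "down"},
    "switch_to_backup_path": lambda args: {"result": "switched", **args},
}

def call_large_model(prompt: str) -> str:
    """Stand-in for the real model call; returns a canned orchestration scheme."""
    return json.dumps({"steps": [
        {"api": "query_port_status", "args": {"port": "GE0/1/3"}},
        {"api": "switch_to_backup_path", "args": {"port": "GE0/1/3"}},
    ]})

def run_closed_loop(alarm: str, topology: str) -> list[dict]:
    prompt = PROMPT_TEMPLATE.format(alarm=alarm, topology=topology)
    scheme = json.loads(call_large_model(prompt))      # structured output from the model
    results = []
    for step in scheme["steps"]:                       # interactive execution by the framework
        handler = ATOMIC_APIS.get(step["api"])
        if handler is None:
            raise ValueError(f"unknown atomic API: {step['api']}")
        results.append(handler(step["args"]))
    return results

if __name__ == "__main__":
    print(run_closed_loop("LOS on GE0/1/3", "OLT-3 <-> splitter <-> ONU-17"))
```

Constraining the model to a fixed JSON schema and a fixed set of atomic APIs is what lets the execution results feed back into the loop as tool feedback.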
Lu Jianguo concluded that the main value of large models lies in their emergent capability, that is, the ability to produce innovation by combining existing knowledge. Realizing this emergent capability, however, depends on high-quality data production, acceptance, and accumulation; a virtuous data cycle is the determining factor.


Post time: Nov-20-2023