Messi has completed his 300th club assist! He may score the 800th goal of his career in the next game!

On March 12th, Paris Saint-Germain narrowly beat Brest 2-1, with Messi assisting and Mbappé scoring the winner. Mbappé had scored his 201st goal for Paris in the previous round of Ligue 1, becoming the top scorer in the club's history. In this match his goal was also the 3,000th in Paris's Ligue 1 history, making them the 12th club in Ligue 1 history to reach that mark and the third fastest to do so: they needed 1,858 games, with only Marseille (1,801) and Saint-Étienne (1,816) quicker.

Paris only became a giant in the past decade, after Qatari ownership spent nearly 1.5 billion euros to build PSG into a Champions League contender, signing Messi, Neymar, Mbappé and Ramos, above all the attacking Big Three. However, this group may break up next year, and Messi could leave the team as soon as this season ends.

Messi is currently negotiating a contract extension with PSG. He wants to stay in one of Europe's top five leagues for at least one more year so that he can keep chasing the Champions League and compete for the Copa América with the national team. Even if the partnership with PSG ends, Messi takes every game seriously as long as he is on the pitch, and his role at PSG is arguably the most visible: only Messi can truly liberate Mbappé.

In this game Mbappé made plenty of runs but saw few chances, until Messi's quick release finally let his pace tell: he was put clean through on the goalkeeper and finished calmly. It was Messi's 300th assist, all recorded in Europe's top five leagues and all for giants, Barcelona and PSG.

Mbappé has played with Messi for less than two seasons, yet the number of assists he has received from Messi already ranks in the top three among all of Messi's teammates. Mbappé wants golden boots of every kind, and partnering Messi is obviously the best way to get them, because Messi loves setting up his teammates. Suárez said earlier that Messi and Neymar had helped him compete with Cristiano Ronaldo for the European Golden Boot, giving way to him and teeing him up.

Messi now has 300 club assists: 269 in 672 games for Barcelona and 31 in 65 games for PSG. He did not score in this round of Ligue 1, but he has recently been averaging roughly a goal a game. One more goal will bring up the milestone of 800 career goals, and he may reach it in the next round of Ligue 1; otherwise it could come with Argentina's national team, which has two friendlies after this round of Ligue 1.

Five self-driving companies have fallen in half a year. Is the American tech scene facing a shake-up?

Globally, the United States has long led in artificial intelligence technology and applications, and driverless technology took root in Europe and America early on. Yet judging from how American self-driving companies have fared in recent years, layoffs and departures have become frequent, and a growing number of these companies have gone bankrupt.

According to the New Strategy Low-Speed Driverless Industry Research Institute, over the past six months many companies in the American driverless industry have announced large layoffs or filed for bankruptcy, including Ibeo, the pioneer of lidar, and the self-driving truck company Embark Trucks.

Embark Trucks: from a $5 billion valuation to near bankruptcy in only 16 months

Embark was founded in 2016 and is headquartered in San Francisco; it was co-founded by Alex Rodrigues and Brandon Moak. Embark's initial plan was reportedly to build self-driving shuttle buses for university campuses, but the team soon pivoted to self-driving trucks for highways.

The market for self-driving trucks has broad prospects, and Embark quickly won the favor of venture capitalists. Between 2016 and 2019 it raised a $2 million seed round led by Maven Ventures, a $15 million Series A led by DCVC, a $30 million Series B led by Sequoia Capital and a $70 million Series C led by Tiger Global.

In 2021, Embark began seeking a listing. In its deal filings, Embark said it planned to put self-driving trucks into operation across the industrial belt of the southern United States from 2024. In November 2021, Embark went public at a market value of roughly $5 billion and raised about $614 million.

However, this seven-year-old company never truly got its business off the ground, let alone generated revenue or turned a profit. According to foreign media reports, Embark has announced the end of its operations and laid off most of its employees.

Locomation: nearly 70% of employees laid off, and the product has yet to enter commercial operation

Locomation spun out of Carnegie Mellon University's National Robotics Engineering Center and was founded by five co-founders in 2018. It has been developing a platooning technology for heavy trucks, the Autonomous Relay Convoy (ARC), in which a lead truck with a safety driver guides a second truck whose driver can rest or do other work.

In February this year, Locomation failed to raise additional capital, which led to layoffs and, at one point, outside rumors of bankruptcy. Locomation denied the reports of a shutdown but admitted it had let go of most non-engineering staff. It is unclear how many of Locomation's estimated 122 employees face dismissal; reportedly about 80, most of whom left on February 24th.

Çetin Meriçli, co-founder and CEO of Locomation, said the company will not shut down, but it has indeed cut most non-engineering staff in the face of economic headwinds.

On the company's trajectory, Finch Fulton, Locomation's vice president of policy and strategy, said it had hired many smart people, executed its strategic plan and signed some customers, but in the end it could not raise funds and the product could not be commercialized, so the decision had to be made.

Quanergy: filing for bankruptcy protection to seek maximum value in a sale

Founded in 2012, Quanergy was one of the first companies to develop lidar devices for automotive applications. Its mission is to create powerful, affordably priced smart lidar solutions for Internet of Things applications that improve people's experience and safety.

In December 2022, Quanergy announced that it had begun an orderly sale process for its business. To facilitate the sale and maximize value, the company filed for protection with the U.S. Bankruptcy Court for the District of Delaware (the "Bankruptcy Court") under Chapter 11 of the U.S. Bankruptcy Code and intends to sell its business under Section 363 of the Code.

Before filing for bankruptcy protection, the board and management evaluated a range of strategic options to maximize value for all stakeholders. The company also cut operating expenses and settled a major patent lawsuit with Velodyne. Now, with the protections the bankruptcy process provides, it intends to widen its marketing to potential buyers interested in specific business segments or assets and to continue pursuing a sale of the business.

The company expects to keep operating through the bankruptcy-protection process and to seek court approval for a swift sale. To fund and protect its operations, Quanergy intends to use available cash on hand and normal operating cash flow to cover post-petition operations and costs.

Argo AI: shut down amid layoffs, with its investors starting afresh to absorb the team

In October 2022, Ford issued an official statement: Argo AI, the autonomous driving company jointly backed by Ford and Volkswagen, would be closed and dissolved, with its employees and some operations absorbed by Ford and Volkswagen respectively.

Argo AI, an autonomous driving technology company founded in 2016 by veterans of Google's self-driving car effort, received a $1 billion investment from Ford a year later and brought in Volkswagen as an investor in 2019. Backed by these two giants, Argo AI could fairly be called a star unicorn of autonomous driving.

Its valuation reached as high as $7 billion, it raised a cumulative $2.6 billion, and its team grew to around 2,000 people. Yet it was this promising L4 autonomous driving company that was reported to be laying off staff and shutting down, with Argo AI's employees and technology to be taken over by Ford and Volkswagen after the dissolution.

So far, Ford has indeed done just that. Just last week, Ford announced it would set up Latitude AI, a wholly owned subsidiary, and hired hundreds of employees for the new company from Argo AI, which it had previously backed.

According to Ford's statement, Latitude, based in Pittsburgh, Argo's former home, will focus on developing a hands-free, eyes-off driving system for millions of vehicles. Ford said that in the near term Latitude will focus on automated technology that assists human drivers rather than on autonomous driving that replaces them entirely.

Velodyne: exiting the stage after merging with Ouster

In 2016, the Velodyne lidar division was spun off from Velodyne Acoustics and became an independent company. To secure priority access to its products, Ford and Baidu jointly invested $150 million in Velodyne Lidar. Velodyne went on to sign deals with companies such as Mercedes-Benz, Hyundai Mobis and Boston Dynamics.

In the lidar industry, Velodyne long held the lead as the original lidar pioneer and considered itself "the only lidar company able to mass-produce multi-line units." But as more and more companies crowded onto the lidar track and competition grew ever fiercer, Velodyne began to decline.

Price was a particular problem. Velodyne mainly sold 64-line, 32-line and 16-line lidars, officially priced at $80,000, $40,000 and $8,000 respectively. Because Velodyne sold through agents, buyers also had to pay taxes and agency service fees, so the final price of a Velodyne 64-line lidar came to nearly 700,000 yuan, while other companies could offer far more cost-effective products.

In February this year, Velodyne said it had completed its "merger of equals" with lidar maker Ouster, effective February 10th. The combined company keeps the Ouster name and continues to trade under the ticker "OUST". This also means Velodyne has exited the stage of history.

Although these autonomous driving start-ups have fallen in the past six months, the United States is still doubling down on autonomous driving. According to foreign media reports, the Autonomous Vehicle Industry Association in the US released a policy framework on March 1 outlining key priorities for federal autonomous vehicle legislation and regulation.

The framework includes several recommendations to the US Congress and the Department of Transportation intended to guide federal action and promote the deployment and commercialization of self-driving cars in the United States.

Jeff Farrah, the association's executive director, said: "Self-driving cars are being tested and operated in states across the US, taking passengers and goods where they need to go. The United States is currently the leader in autonomous driving technology, but other countries are making rapid progress." The association's members include Aurora, Cruise, Ford, Volkswagen, Waymo and Zoox.

As China's standing on the global stage continues to rise, its research achievements in autonomous driving have drawn worldwide attention. In 2023, China and the United States will enter a new round of technological competition, and the industry is watching closely.

Musk: I was interested in cryptocurrency, but now I love AI.

On March 6th, Tesla CEO Elon Musk said on social media: "I used to be interested in cryptocurrency, but now I love artificial intelligence."

According to media reports, Musk recently contacted artificial intelligence researchers and planned to set up a new research laboratory to develop a substitute for ChatGPT.

Musk's remarks also suggest that, compared with ChatGPT and Microsoft's new chatbots, the chatbot he is developing may place fewer restrictions on controversial topics.

Visualizing a novel's descriptions with AI generation

In his article recommending the novel Salt Town, Liu Shen Lei Lei quoted some of its scenery descriptions, one of which spoke of "the sun soaked in salt".

What should a "salt-soaked sun" look like? My own imagination seems to fall short. Plenty of AI software abroad can now paint: give it a few keywords and it generates a new picture. By analogy, what would "the sun steeped in honey" or "the sun fried with peanuts" look like? Some novels describe landscapes humans can barely imagine, and machines may now be able to draw them. Turning such language into images really does carry visual impact.

As artificial intelligence develops further, "reverse visualization" should also become possible: hand paintings by Picasso, Van Gogh or Monet to the machine, and the software generates a set of words describing their content and style.

There is now a variety of AI image generators that use artificial intelligence algorithms to turn text into images. Enter a text prompt or description and these tools can turn your idea or concept into a visual representation, a picture, within seconds. They are built on deep learning algorithms trained on large image datasets and their corresponding descriptions.
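As a concrete illustration of how such a tool is typically driven, here is a minimal sketch using the open-source diffusers library with a Stable Diffusion checkpoint; the model name, prompt and output file are illustrative assumptions rather than anything prescribed by the article.

```python
# A minimal text-to-image sketch with Hugging Face diffusers and Stable Diffusion.
# Checkpoint name, prompt and file name are illustrative assumptions.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# A few keywords are enough; the model turns the description into a picture.
prompt = "the sun soaked in salt, seaside town, impressionist oil painting"
image = pipe(prompt, num_inference_steps=30).images[0]
image.save("salt_sun.png")
```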

As training scales up rapidly and experience accumulates, this kind of AI image generator will become smarter, more flexible and more creative. For now, though, such generators mostly run in one direction, "from text to image". I believe the reverse direction, "from image to text", will be built out so that software can generate a paragraph about a picture reflecting the content and style of the work. That would in effect be the beginning of AI analysis of artworks, even if it is rough at first.
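A rudimentary form of this reverse direction already exists as image captioning, although it describes content rather than artistic style. A minimal sketch, assuming the Hugging Face transformers pipeline and the BLIP captioning checkpoint as example choices:

```python
# A minimal image-to-text (captioning) sketch with the transformers pipeline.
# The model choice and input file are illustrative assumptions.
from transformers import pipeline

captioner = pipeline("image-to-text", model="Salesforce/blip-image-captioning-base")
result = captioner("salt_sun.png")          # e.g. the image generated above
print(result[0]["generated_text"])          # a short description of the picture
```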

Beyond two-way "text and image" generation, once video analysis is deep enough and video datasets are large enough, we can expect two-way "text and video" generation through training: write a prompt and the AI generates a matching video in the prompt's content and style; conversely, feed in a video and it generates a paragraph explaining the video's content and style. The former would open the door to generating video purely from spoken or written language, while the latter is the starting point for AI that analyzes and reviews film and television. If one day people can automatically generate a film or TV work just by entering a literary screenplay in spoken or written language, would that still seem incredible?

Two-dimensional "text to image" generation is already a reality; can "text to video", which adds the dimension of time, be far behind? If I had text-to-video software, I might enjoy "producing" little films: Guan Gong versus Qin Qiong, Marx visiting the Confucius Temple, or even more fanciful stories, and they might well turn out strikingly realistic. What about you?

Suppose "text to image" is the forward direction. What happens if a set of words produced in the reverse direction ("image to text") is used as the prompt for another round of forward generation? It is like translating a Chinese poem into English and then translating the English back into Chinese. After a few cycles, as linguistic and cultural elements are quietly worn away or added, the result of this game of telephone becomes unrecognizable.

Imagine, further, repeating forward and reverse generation with AI software until something like an oscillation forms. The final output will surely be unrecognizable compared with the initial input, yet the whole process of "transcription" and "translation" remains documented and traceable.
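To make this round-trip concrete, the two sketches above can be chained into a loop; this is only a toy illustration under the same assumed models, not a claim about how such an "oscillation" should actually be run.

```python
# A toy forward/reverse loop: text -> image -> text -> image ...
# Models are the same illustrative choices as in the earlier sketches.
import torch
from diffusers import StableDiffusionPipeline
from transformers import pipeline

text_to_image = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
image_to_text = pipeline("image-to-text", model="Salesforce/blip-image-captioning-base")

prompt = "the sun soaked in salt"
for cycle in range(3):
    image = text_to_image(prompt).images[0]                # forward: text -> image
    prompt = image_to_text(image)[0]["generated_text"]     # reverse: image -> text
    print(f"cycle {cycle + 1}: {prompt}")                  # watch the wording drift
```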

The result of stacking forward and reverse generation over and over undergoes a qualitative change and becomes something new, perhaps a kind of "emergence". Trivial as each step is, the accumulated wear and gain make it intricate, and you can no longer trace its ins and outs. By contrast, the language of modernist writers and painters went through only low-frequency oscillations, and perhaps a sense of novelty was born in them unexpectedly; modernist creations such as Misty Poetry let us draw that inference.

With the arrival of the camera, "likeness" was no longer rare and the value of realistic painting plummeted. Modernists and postmodernists of every stripe found their way out through "unlikeness", claiming it was truer in essence. Now that AI is everywhere, what will our future works of text, image and video look like?

Reflections prompted by Liu Shen Lei Lei's recommendation of the novel Salt Town

From 9-0 to 0-1! Salah missed a penalty as Liverpool lost to a relegation battler; they keep finding new ways to humiliate Manchester United!

Liverpool, the Premier League giants, started this season poorly, at one point leaving fans with little hope of Champions League football next season. Fortunately, as the season wore on, Liverpool gradually climbed out of the trough. In the previous round they demolished their old rivals Manchester United 7-0 at home, climbing to fifth in the table, only three points behind Tottenham with a game in hand, and seemingly seizing the initiative in the race for the top four. This week the 27th round of the Premier League kicked off, with Liverpool travelling to Bournemouth. In the first meeting this season Liverpool had won 9-0 at home, which is why fans assumed they would win comfortably. After kick-off Liverpool dominated possession but could not find a goal, while Bournemouth made the most of their few attacking chances and Billing scored the only goal of the game. In the end Bournemouth took their revenge, winning 1-0.

Although the opponents were not especially strong, Klopp did not let his guard down; after all, the Premier League has always been full of upsets. For this game Klopp stuck with his 4-3-3: Alisson continued in goal, Alexander-Arnold, Konaté, Van Dijk and Robertson formed the back four, Elliott, Fabinho and Bajcetic sat in midfield, and the front three featured Salah alongside Gakpo, who had scored twice in the previous game.

After the referee's opening whistle, Liverpool kept the ball at their feet and pressed toward Bournemouth's goal. In the 5th minute Liverpool won a corner on the right; Arnold swung it to the back post, Van Dijk rose highest and headed toward the far corner, and with Bournemouth's goalkeeper beaten, a defender cleared it off the goal line.

Two minutes later, however, Arnold made a fatal mistake in midfield that gifted Bournemouth a counterattack. Ouattara was through one-on-one, but after rounding Alisson he shot wide from a tight angle. In the 13th minute Gakpo put the ball in the net after receiving a pass inside the penalty area, but the referee immediately flagged for offside and the goal was disallowed.

In the 23rd minute Bournemouth struck back again. Ouattara beat the offside trap on the right, collected a long pass from a teammate and drove straight into Liverpool's penalty area, then cut the ball back from the byline to find Billing arriving in the middle, who side-footed home with ease to give Bournemouth a 1-0 lead.

In the second half Klopp brought on Jota, looking to sharpen the attack further. In the 49th minute Jota got off a threatening shot from the edge of the box, but Bournemouth's goalkeeper was in inspired form and saved it to keep his goal intact.

In the 67th minute Milner delivered a cross from the right and Jota headed goalward; the ball struck the outstretched arm of a Bournemouth defender and went out of play. After a VAR review the referee awarded Liverpool a penalty. Unfortunately, Salah dragged the spot kick wide, and Bournemouth clung on to their one-goal lead.

Over the remaining half hour the trailing Liverpool piled pressure on Bournemouth's goal, but for all the bombardment they could not break through, and in the end they had to accept a 0-1 defeat. Having just put seven past Manchester United, Liverpool then lost 0-1 to a relegation battler, which is why so many fans joked after the game that Liverpool will go to any lengths to humiliate their old rivals!

Can today's artificial intelligence replace human beings, and can it attain real intelligence?

The answer up front: at present, artificial intelligence cannot replace human beings at all, nor does it possess real intelligence.

The human brain has roughly 100 billion neurons. Why do humans have intelligence? According to scientists, once the number of neurons in the brain exceeded a certain threshold, intelligence emerged.

ChatGPT's model has 175 billion parameters. If training continues, will ChatGPT acquire real intelligence? Or if the parameter count keeps growing, say to 1 trillion, will it then?

The answer is no.

ChatGPT is built on artificial neural networks (transformer networks, in fact, rather than convolutional ones) whose underlying mechanism loosely imitates human neurons, with each parameter playing a role analogous to a neuron. But the computer's simulation of a neuron is crude. Signals between human neurons are not just 0 or 1; there are also intermediate, phase-like values, whereas the simulated neurons carry only 0s and 1s with no phase. The amount of information a computer "neuron" passes on may be only a tenth to a hundredth of what a human neuron transmits.

Moreover, the links between human neurons are extremely complicated: besides horizontal connections there are vertical ones. Humans have 100 billion neurons, each with more than 1,000 outgoing connections, for roughly 100 trillion synapses in total. The computer's simulation of these links is extremely simple, connecting only adjacent layers.

Therefore, even if today's neural-network-based artificial intelligence keeps adding parameters, quantity will not turn into quality and it will not become truly intelligent.

(Figures: the neural network of the human brain; a computer neural network.)

That was the micro view. From a macro view, powerful as ChatGPT is, it has only the ability to generalize, not to reason. In other words, in principle it does not understand what we humans are training it for; it just memorizes patterns, knowing that a given kind of problem is solved in a given way, without understanding what the problem is.

For example, artificial intelligence cannot understand real addition or mathematical rules. Suppose we taught ChatGPT addition, subtraction, multiplication and division, then gave it the four numbers (1, 1, 6, 2) and asked it to make 24 using those operations. Humans can reason their way to 6 * (2 + 1 + 1) = 24, but if ChatGPT has never been taught how to play the 24 game, knowing the four operations alone will not let it work the answer out.
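To show what such reasoning amounts to mechanically, here is a small brute-force search for the 24 game. It is an illustrative sketch (it covers two bracket shapes, which is enough for this instance), not something described in the article.

```python
# Brute-force search for the "24 game": try every ordering of the four numbers
# and every combination of +, -, *, / over two bracket shapes.
from itertools import permutations, product

def solve_24(numbers, target=24, eps=1e-6):
    ops = {"+": lambda a, b: a + b,
           "-": lambda a, b: a - b,
           "*": lambda a, b: a * b,
           "/": lambda a, b: a / b if abs(b) > eps else float("inf")}
    for a, b, c, d in permutations(numbers):
        for o1, o2, o3 in product(ops, repeat=3):
            # left-to-right evaluation: ((a o1 b) o2 c) o3 d
            if abs(ops[o3](ops[o2](ops[o1](a, b), c), d) - target) < eps:
                return f"(({a} {o1} {b}) {o2} {c}) {o3} {d} = {target}"
            # grouped evaluation: (a o1 b) o2 (c o3 d)
            if abs(ops[o2](ops[o1](a, b), ops[o3](c, d)) - target) < eps:
                return f"({a} {o1} {b}) {o2} ({c} {o3} {d}) = {target}"
    return None

print(solve_24([1, 1, 6, 2]))   # finds a solution such as ((1 + 1) + 2) * 6 = 24
```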

In other words, ChatGPT can generalize over what it has been taught, but it cannot deduce what it has not been taught from what it already knows.

ChatGPT has only inductive ability and no reasoning ability, unlike human beings, who have both; that is why humans are creative and possess real intelligence.

On the surface, AI such as ChatGPT looks creative. You can paint a picture with Stable Diffusion, for example, but that is not really creation: Stable Diffusion is recombining, in different patterns, the templates distilled from the countless images it was trained on.

As for induction, ChatGPT can be trained on the whole of human knowledge and become the strongest generalizer in history, but its reasoning ability is zero.

Personally, I think that as long as human beings exist, they will tirelessly pursue eternal life. There are two roads to it. One is the great development of artificial intelligence: in the future AI gains real intelligence and replaces human beings, or human consciousness is uploaded to computers and becomes virtual life. Either way, silicon-based life replaces carbon-based life and attains immortality.

The other road is for us, as carbon-based life, to renew our bodies' cells indefinitely and so live forever. At present the maximum human lifespan is about 120 years. Past the age of 70 or so, the body's cells have divided so many times that errors occur and some become cancer cells; cancer cells escape the immune system's control, consume the body's resources and eventually kill the person through systemic failure. In addition, many human cells cannot divide, or can divide only a limited number of times. Nerve cells, for instance, are hard to renew: in old age brain cells die and cannot be replaced, leading to Alzheimer's disease, and heart cells cannot be renewed, so people ultimately die of heart failure.

In nature there are many organisms and cells that can divide and reproduce indefinitely without error. HeLa cells, for example, have been cultured for countless generations and are still fully capable of reproducing; they have effectively achieved immortality. The immortal jellyfish, likewise, can revert to an earlier stage of its life cycle; if the earth's deep-sea environment does not change much and no accident befalls it, it can in principle live forever.

Why can't human beings achieve eternal life as carbon-based life? Because the human body is too complicated. Humanity has advanced by leaps and bounds in information technology but only slowly in biotechnology.

In any case, as things stand, whichever road to eternal life you pick, whether artificial intelligence replaces human beings and silicon-based life supplants carbon-based life, or human beings themselves become infinitely renewable, intelligent carbon-based life, the goal of either road remains very, very far away.

Ji Lianying, co-founder of Muniu Technology, appointed as an intelligent connected vehicle expert of the China-Europe Association

[Text/Song] Recently, Lin Shi, Secretary General for Intelligent Connected Vehicles of the China-Europe Association, met Ji Lianying, co-founder of Muniu Technology, at the association's first private board meeting and presented him with a letter of appointment as an intelligent connected vehicle expert of the China-Europe Association. The two sides then had an in-depth exchange on Muniu Technology's technology, development strategy and direction.

Ji Lianying received his Ph.D. from Beijing Institute of Technology in 2009. He has worked on embedded systems, human-sensing technology and artificial intelligence algorithms at research institutions including the Chinese Academy of Sciences and the National University of Singapore. He was formerly technical director of Wuxi Microsense Technology Co., Ltd., where he developed China's first full-body motion capture system, and before founding Muniu he worked at the University of Chinese Academy of Sciences. Muniu Technology was established in Kansas, USA in May 2015, with a Beijing R&D center following that July. The name comes from the muniu liuma, the "wooden ox and gliding horse" invented by Zhuge Liang and others in the Three Kingdoms period and an early prototype of intelligent transport; the team takes it as inspiration, embracing the intelligent era and paying tribute to Chinese ingenuity. With its blend of Chinese and American genes, Muniu Technology skipped the early exploration stage and went straight to work in the 77 GHz field, staying at the technological forefront from the start and achieving fruitful results.

Ji Lianying said at the meeting: "The company's core founding team all graduated from the radar technology institute of Beijing Institute of Technology and have more than ten years of radar research and project experience. The company has focused on millimeter-wave radar R&D and has deep technical strength in wideband antennas, MIMO and signal processing, at a leading level in the industry, with more than ten patents."

Ji Lianying said: "In the radar market, the passenger car market is the largest, but this market is basically monopolized by international manufacturers. In the field of commercial vehicles, foreign manufacturers disdain to do it, which is a very big blue ocean for domestic manufacturers. In the field of autonomous driving, the demand for driverless technology is relatively high. "

Lin Shi pointed out that, as a startup, Muniu Technology executes faster and more forcefully than large companies, so it may beat them to the innovation and productization of the same new technology. Bringing ADAS to low-end cars demands better cost-performance and more practical technical support; China has both the world's largest makers of low-end cars and highly innovative companies like Muniu Technology, and that combination is its leverage in global automotive competition.

Looking ahead to Muniu Technology's development, Lin Shi said he hopes the company will draw on its own "hard power" to "escort" China's new-energy vehicle industry through the intelligent era.

Awesome! A first from Shenzhen University: a "Virtual Digital Human" micro-specialty!

With the development of 5G, AI, VR and other technologies,

The "Virtual Digital Man" industry is booming.

What? Virtual digital people?

I've heard of "virtual", I've heard of "digital", and I've certainly heard of "human",

so why does putting them together make no sense?

Don’t worry!

This is not a random word mashup, but one of Shenzhen University's first micro-specialties:

"Virtual Digital Human"

With the advent of the "metaverse" era, virtual human technology has become one of the hottest industry tracks. Shenzhen University recently approved seven micro-specialty construction projects, including "Virtual Digital Human" and "Edge Computing and Internet of Things Communication". Among them, the "Virtual Digital Human" micro-specialty is a school-enterprise collaboration between the School of Communication and Tencent Technology (Shenzhen) Co., Ltd., committed to breaking down barriers between colleges and disciplines and jointly cultivating the interdisciplinary, innovative talent that the market lacks.

"Virtual Digital Man"

"Virtual digital human" is a visible, interactive and adjustable virtual human form that digitizes the human body structure and presents it on the terminal screen through computer technology. As a hot field at present, it is the intersection of computer, digital media, marketing and other disciplines, and it is also the new direction of head Internet companies such as Tencent, Baidu and Iflytek.

Technically, virtual digital humans fall into two categories: human-driven and AI-driven. Human-driven virtual humans are the more mature field at present, represented in the industry by Luo Tianyi, Liu Yexi, Xingtong and AYAYI. With the concepts around the "metaverse" now clustering together, the virtual digital human is one of the metaverse's core industrial links; as the barrier to entry falls and application scenarios multiply, the virtual digital human industry is expected to reach the hundred-billion scale by 2025.

In March 2021, the state wrote the development of virtual digital technology into the 14th Five-Year Plan for National Economic and Social Development of the People's Republic of China and the Outline of Long-Range Objectives Through the Year 2035. Innovation in virtual digital technology has become a necessary path for China's future industrial innovation and technological strength. Against this backdrop, Shenzhen University grounded itself in the present, looked to the future, and took the lead in creating the "Virtual Digital Human" micro-specialty.

Built on the school-enterprise cooperation between Shenzhen University's School of Communication and Tencent Technology (Shenzhen) Co., Ltd., the "Virtual Digital Human" micro-specialty aims to serve the audio-visual content needs of the metaverse and artificial intelligence era and to cultivate innovative professionals who understand the methods and principles of producing virtual digital humans; master the planning, creation and promotion of virtual IP; know the basics of artificial intelligence, algorithms and knowledge graphs and can combine code with imagery; and bring professional insight and research ability to the metaverse industry, bridging the "last mile" between professional education and industry needs.

The micro-specialty will adopt a training model of "course cooperation + project cooperation + IP co-creation", enrolling 40 students each academic year from majors such as computer science, media arts, and journalism and communication. During the program students complete eight courses, including Digital Human Production, the Metaverse and Media Philosophy, Introduction to Artificial Intelligence, and Virtual IP Operation.

(Photo: laboratory scene)

Reportedly, half of the micro-specialty's course content will be taught in person by the CDD team of Tencent's Content Ecosystem Department, and students will have the chance to work on real corporate projects as project interns. Visits and exchanges are also arranged during the program to broaden students' horizons, with the aim of supplying innovative, creative talent that meets the needs of the intelligent audio-visual industry.

Wang Jianlei, head of the "Virtual Digital Human" micro-specialty and an associate professor at Shenzhen University's School of Communication, said that virtual digital humans are a very cutting-edge direction in which Internet technology companies are still feeling their way toward a new blue-ocean market. In education, Communication University of China, Nanjing University and South China University of Technology have all consciously begun to offer virtual production courses. The cooperation between Shenzhen University and Tencent has inherent advantages; if a micro-specialty can be established on that basis first, it will be the first virtual digital human program in China, and its distinctive features and sharp focus will build strong recognition. Micro-specialty training can also deliver scarce talent to the market in the shortest time, which will surely benefit industry, academia and research alike.

World-leading CT-FFR clinical research completed with Keya Medical's independently developed AI products: TARGET trial results released at ACC.23

In the early hours of March 5, Beijing time, the team of Professor Chen Yundai from the Cardiology Center of the Chinese PLA General Hospital presented a featured clinical research report at the American College of Cardiology / World Congress of Cardiology (ACC/WCC) 2023 conference: the TARGET trial, a clinical study of treatment and follow-up in patients with stable coronary heart disease based on artificial intelligence CT-FFR technology. The results were published simultaneously in the leading international journal Circulation (a JCR Q1 journal with an impact factor of 39.9).

The research was supported by the National Key R&D Program and the Beijing Science and Technology Rising Star Program, and brought together cardiology and radiology teams from several top-tier domestic hospitals, including Beijing Anzhen Hospital affiliated to Capital Medical University, the Second Affiliated Hospital of Zhejiang University School of Medicine, Qilu Hospital of Shandong University, the First Affiliated Hospital of Xinjiang Medical University, and Tongji Hospital of Tongji Medical College, Huazhong University of Science and Technology. The corresponding author is Professor Chen Yundai; Associate Professor Yang Junjie, Deputy Chief Physician Shan Dongkai and Dr. Wang Xi of the Department of Cardiovascular Medicine of the PLA General Hospital are co-first authors.

01

Study overview

The TARGET study is the world's first multi-center, randomized, controlled clinical study to evaluate a care strategy for patients with new-onset stable chest pain based on on-site, machine-learning CT-FFR computation. It used artificial intelligence CT-FFR technology independently developed in China (by Keya Medical Technology Co., Ltd.) and enrolled 1,216 patients from six medical centers in China. Enrolled patients had an intermediate-to-high pretest probability of obstructive coronary heart disease, with coronary CT angiography suggesting borderline stenosis of 30%-90%. Patients were randomized to a CT-FFR-guided care group (experimental group) or a standard care group (control group). The primary endpoint was the proportion of patients found at invasive coronary angiography within 90 days to have non-obstructive coronary artery disease, or obstructive disease that did not undergo revascularization. Secondary endpoints included major adverse cardiovascular events (MACE), quality-of-life outcomes, improvement in angina symptoms, and medical costs.

The results showed that, compared with the control group, the proportion of patients in the CT-FFR-guided group found to have non-obstructive coronary disease, or obstructive disease without revascularization, fell significantly (28.3% vs. 46.2%, P<0.001). Overall, more patients in the CT-FFR group underwent revascularization than in the control group (49.7% vs. 42.8%, P=0.02), but there was no significant difference in MACE over one year of follow-up (hazard ratio 0.88; 95% CI 0.59 to 1.30). Quality of life and symptoms were similar between the two groups during follow-up, while medical costs in the CT-FFR group tended to be lower. The conclusion is that, compared with a standard strategy centered on cardiac stress testing, an on-site, machine-learning-based CT-FFR strategy significantly reduces the proportion of patients found within 90 days at coronary angiography to have non-obstructive disease or no need for intervention. The CT-FFR strategy also tends to save medical costs and raises the proportion of revascularization in the selected population, while matching the traditional pathway in improving symptoms, quality of life, and the incidence of major adverse cardiovascular events.

02

Implications of the study

The TARGET results show that "an on-site, machine-learning-based CT-FFR strategy is feasible, safe and effective".

Over the past 10 years, the widespread use of coronary CTA has advanced the diagnosis and treatment of coronary heart disease in China; by one count, 4.6 million coronary CTA examinations were performed in China in 2017. As a result, the purely diagnostic role of invasive coronary angiography is fading, yet among patients undergoing coronary angiography in China, most are found in the cath lab to have no obstructive coronary stenosis. Part of the reason is that functional testing is not widely used, or that advanced cardiac imaging is not sufficiently available. The TARGET study further underlines that coronary angiography should be reserved for the patients most likely to have obstructive stenosis or to benefit from revascularization, and that a CT-FFR strategy can significantly optimize the management of the stable coronary heart disease population.

The study used the CT-FFR simulation technology independently developed by Keya Medical Technology Co., Ltd. (DeepVessel FFR, rendered literally as "deep pulse fraction"), which applies deep learning to assess the physiological function of the coronary arteries and to estimate FFR from coronary CT angiography images. Built on deep learning techniques developed and refined from recent advances in computer vision, it can compute non-invasive fractional flow reserve quickly and accurately. In January 2020 the technology received China's first NMPA Class III registration certificate for an artificial intelligence medical device, and it is now the only CT-FFR product in the world to hold triple certification from China's NMPA, the EU's CE mark and the US FDA.

03

Pamela Douglas, MD (Duke Clinical Research Institute, Durham, North Carolina), a former president of the ACC who led the HeartFlow-funded PLATFORM and PRECISE trials, noted that the most striking thing about the TARGET trial is the novelty of its on-site CT-FFR analysis.

Douglas said:

It is entirely possible that on-site deployment is cheaper and returns results faster. In clinical practice, if CCTA is used as a first-line test, Douglas said, the question becomes: "If you have a borderline lesion, what do you do next?" For her, "it's a bit of a no-brainer, because CT-FFR is just a software analysis. Even though earlier products were quite complicated, scheduling a stress test and asking the patient to come back later is not without risk."

The researchers pointed out:

On-site deployment of the artificial intelligence computation was central to the TARGET study. "The advantage of using an AI algorithm is that it makes on-site deployment possible, avoids transferring sensitive medical data, shortens computation time and increases clinician involvement," the researchers noted. They explained that although FFR can also be computed with on-site computational fluid dynamics, that strategy is complex and resource-intensive; the convenience of machine learning will help bring CT-FFR to a wider range of settings. They added that "an on-site CT-FFR strategy is practical and may be better suited to clinical practice needs across diverse clinical environments."

Fall behavior recognition: AI algorithms plus a data analysis platform

Working from ToF depth data, a deep learning network built on an AI expert model can accurately recognize human postures such as falling, sitting, lying and walking. Intelligent AI behavior-recognition monitoring watches for abnormal behavior online, analyzes the captured video with AI behavior-detection algorithms, and raises an alarm the moment someone's behavior breaks the rules. Automatically recognizing employees' actions in the relevant work areas also enables fatigue detection and off-post detection in real scenes.
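As a rough illustration of the kind of posture-plus-motion rule such systems rely on, here is a minimal sketch over per-frame pose estimates. The keypoint format, thresholds and frame rate are assumptions for illustration, not any vendor's actual algorithm.

```python
# A minimal rule-based fall-detection sketch over per-frame pose estimates:
# flag a fall when the head drops sharply within about one second and the
# body ends up in a lying (wider-than-tall) posture. All values are assumed.
from dataclasses import dataclass
from typing import List

@dataclass
class Pose:
    head_y: float   # head height, normalized 0 (top of frame) .. 1 (bottom)
    hip_y: float    # hip height, normalized
    width: float    # body bounding-box width, normalized
    height: float   # body bounding-box height, normalized

def detect_fall(frames: List[Pose], fps: int = 15,
                drop_threshold: float = 0.3, flat_ratio: float = 1.2) -> bool:
    window = max(1, fps)  # compare poses roughly one second apart
    for i in range(window, len(frames)):
        prev, cur = frames[i - window], frames[i]
        head_drop = cur.head_y - prev.head_y          # larger y = lower in image
        lying = cur.width > flat_ratio * cur.height   # horizontal posture
        if head_drop > drop_threshold and lying:
            return True
    return False

# Toy usage: a person standing for a second, then suddenly on the ground.
standing = [Pose(head_y=0.2, hip_y=0.5, width=0.2, height=0.6)] * 15
fallen   = [Pose(head_y=0.8, hip_y=0.8, width=0.6, height=0.2)] * 5
print(detect_fall(standing + fallen))  # True
```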

Unlike wearable monitors and camera-based monitors, the company's smart elderly-care devices provide contactless monitoring: the elderly do not have to cooperate by wearing anything or charging it regularly, which makes the devices far more convenient. The escalator fall-detection algorithm can accurately recognize suitcases, trolleys and strollers on an escalator, along with behaviors such as falls, crowd density and riding against the direction of travel, helping managers spot danger the moment it arises. The development of sensors gives objects something like human touch, smell, vision, taste and hearing, bringing them "to life", and is an important means for intelligent products to interact with the outside world.

The smart power series of algorithms provides intelligent monitoring, real-time analysis and hazard alarms for unsafe personnel behavior, environmental risks, equipment and other parameters, creating an innovative model for safe power production. Paired with an edge gateway box embedded with the AI vision algorithms, it can also give early warning of diverse campus emergencies, such as crowd gathering, falls, intrusion into dangerous areas and climbing at height, protecting students' safety at school.

Analyzing people's many behaviors in combination with specific fixed scenes yields useful data analysis: for example, breaking a worker's actual operations into steps and establishing standardized rules for timing and actions, or deploying a multi-scene video surveillance system at escalator entrances, exits and riding areas and analyzing it with an artificial intelligence "smart brain". Such a system can quickly and accurately detect violations such as passengers falling, strollers on the escalator, loitering at entrances and exits, riding the wrong way, or leaning out of the escalator, and can intervene early through voice broadcasts and warnings sent to the management unit, effectively preventing accidents caused by unsafe behavior.

Body-movement recognition within AI image-processing algorithms also plays an important role in intelligent monitoring. In visual tracking, a target is followed either by correlating detections between adjacent frames or by a global association algorithm that matches detections across all frames. Tracking body movements from video in real time is efficient and low-cost.
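A minimal sketch of the adjacent-frame correlation idea, using OpenCV template matching; the video file and initial bounding box are placeholders, and a production tracker would add detection and re-identification on top.

```python
# Adjacent-frame correlation tracking sketch: correlate the target's previous
# appearance (a template) with each new frame and follow the best match.
# "corridor.mp4" and the initial box are hypothetical placeholders.
import cv2

cap = cv2.VideoCapture("corridor.mp4")
ok, frame = cap.read()
x, y, w, h = 100, 80, 60, 120                  # assumed initial person box
template = frame[y:y + h, x:x + w]

while True:
    ok, frame = cap.read()
    if not ok:
        break
    # correlate the previous appearance with the new frame
    result = cv2.matchTemplate(frame, template, cv2.TM_CCOEFF_NORMED)
    _, score, _, top_left = cv2.minMaxLoc(result)
    x, y = top_left
    template = frame[y:y + h, x:x + w]         # update the template each frame
    print(f"target at ({x}, {y}), correlation score {score:.2f}")

cap.release()
```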

Personnel behavior analysis mainly covers face recognition, staff behavior recognition, perimeter-intrusion detection, abandoned-object detection, crowd-gathering recognition and intelligent tracking. Relying on sensors built into helmets together with on-site cameras, falls on construction sites can be recognized automatically so that workers can be rescued as early as possible, improving on manual supervision and safeguarding lives. A personnel fall-recognition alarm system reduces the risk of relying on human monitoring on site and improves management efficiency.

A fall-detection system effectively remedies the shortcomings of traditional methods and technologies, improves on manual supervision and safeguards lives. For the elderly, fall detection and recognition supports humane care and intelligent operations management, cutting costs, reducing risk and improving efficiency. AI video-content recognition chiefly targets the automatic identification, classification and prediction of the behavior of people in the frame: from the motion information captured by the camera, algorithms compute human posture and movement trajectories.