Monday, August 29, 2016


Hanergy's car-making plan: at a production volume of 300,000 vehicles, the chips would cost around 6,000 yuan | New Intelligent Driving

On August 25 at the Shanghai New International Expo Center, Gao Weimin, CEO of Hanergy's solar vehicle division, gave a roadshow presentation on Hanergy's solar car project. Scheduled for 20 minutes, his speech ran to 30, much of it devoted to explaining the feasibility of using Hanergy's thin-film solar chips in automobiles.

After the speech, Gao Weimin said that Hanergy had approached automakers early on to promote its solar cells: "They (the automakers) said solar cells looked promising; you use them, we'll supply the chips. None of it worked out." This became the starting point of Hanergy's own car-making plan: Hanergy believes that by building cars itself, it can prove that thin-film solar chips can power a car and put it on the road.

Early last month, Hanergy unveiled four solar cars: Hanergy Solar O, Hanergy Solar L, Hanergy Solar A and Hanergy Solar R. Questions followed immediately.


In fact, Hanergy's car-making dream has been questioned since the day it was announced. Partly this is because most people have never seen a car powered by solar energy and doubt the feasibility of commercial production; partly it is because in May last year Hanergy's share price was halved, wiping out billions in market capitalization and shaking many people's faith.

Gao Weimin is also clearly aware of the challenges facing Hanergy's solar car. After the speech he told the media that much of the auto industry is not optimistic about solar power because they do not know the current state of solar cell technology: they still assume a conversion rate of 6% to 7%, so they are not convinced that solar chips belong on cars.

Over the past few years, Hanergy has acquired four overseas thin-film solar companies (Alta Devices, Solibro, MiaSolé and Global Solar Energy) to accumulate thin-film technology, and through technology integration has raised the energy conversion rate from the industry level of 15% to 18% up to 31.6%. To a large extent, Hanergy's confidence in building cars stems from its confidence in its thin-film chip technology.

According to Hanergy's plan, the conversion rate of its GaAs thin-film solar cell technology will continue to improve, reaching 38% to 42% by 2020.


"Photovoltaic technology and vehicle technology are two totally different subjects, do people just want to do cars, photovoltaic people only think of film power, so Chinese solar-powered car will be combining these two is no easy task. "Gao Weimin has on a number of occasions that this view. He believes that solar cars are ultimately two interdisciplinary cars and solar energy innovation, very valuable. This is also in the SAIC and Beijing Automotive Research Institute, respectively President of the Gao Weimin joined Han can be one of the reasons.

At an event in early July this year, Hanergy announced a partnership with Foton to use solar energy to supply the electricity needed for bus air conditioning. For now, this is a transitional programme on Hanergy's way to building a full solar car.

Hanergy Senior Vice President Zhang Bin told us: "With the chips we can turn to a transitional programme; it doesn't all happen at once. Start with basic auxiliary power, then slowly transition to a full solar car programme. It's a question of path."

Even though volume production of solar cars still needs to overcome some technical bottlenecks, Zhang Bin's statement suggests that Hanergy's first solar car will most likely embed solar cells on top of a lithium-ion battery platform, letting solar and battery work together to extend range and reduce the battery weight of a pure electric vehicle, while meeting body-weight requirements. Gao Weimin said Hanergy has contacted some automakers and is trying to sell this solution to them.

Asked whether Hanergy prefers selling the solution or building cars itself, Gao Weimin said both options are on the table, but the company prefers the latter. He explained to Lei Feng's network (search for "Lei Feng's network" on WeChat) that by building its own cars, Hanergy can learn first-hand how chips for the automotive industry should be developed, because automotive-grade solar cells can differ from the original chips in technology, equipment and raw materials. Developing automotive chips is ongoing work at Hanergy.

At this stage, there is still much for Hanergy's solar car business to sort out. Gao Weimin said the path to a Hanergy-made car is still taking shape:

Hanergy will, in accordance with the relevant regulations, establish an automotive company and apply to the relevant departments for qualification. He said a wide range of cooperation models is possible in the future, such as acquisitions, joint ventures with third parties, and contract manufacturing. At the current pace, Gao Weimin said, Hanergy's first solar-powered car will be available within three years. On costs, he said that at a production volume of 300,000 vehicles, the chips would cost around 6,000 yuan per car.

According to Hanergy's plan, the solar car will not deliberately rely on fixed charging facilities: in good lighting conditions it can charge in the sun while parked. In addition, the full solar car will carry a conventional lithium battery for energy storage; in wet weather or for long-distance travel it can still charge at fixed charging facilities, for a maximum range of 350 km.

Hanergy today resembles the early BYD. Thirteen years ago BYD had battery technology and related patents, but no car manufacturer was willing to bet on pure electric vehicles, so BYD had to make cars itself. After more than ten years of accumulation, the battery maker became the car manufacturer it is today. For Hanergy, with its thin-film solar technology, the road to vehicles is just as long. The bigger challenge is that making cars needs a lot of money.

This article is from New Intelligent Driving. To subscribe, add the public account "New Intelligent Driving" on WeChat.

Monday, August 22, 2016

The four guardian beasts of deep learning

More and more people now want to talk to their own virtual personal assistant: let Siri, Alexa or Rokid complete searches, ticket bookings and alarm-setting for you just by asking, and even remind you during a meeting to take your medicine. Who wouldn't like such a considerate helper that doesn't need to be paid? Virtual assistants are bringing the personal assistant closer to reality, and behind them lies deep learning. Beyond virtual assistants, deep learning will also be the core technology of computer vision, autonomous driving, speech recognition and other fields. Deep learning practice rests on four key elements: computing power, algorithms, data and application scenarios. Like four guardian beasts, each is indispensable.



Deep learning is a technique that performs nonlinear transformation or analysis of its input by building deep neural networks with no fewer than two hidden layers. A deep neural network consists of an input layer, several hidden layers and an output layer. Each layer has a number of neurons, and the connections between neurons carry weights. Each neuron models a biological nerve cell, and each weighted connection models a synapse between nerve cells. Schematically:

(Figure: a deep neural network drawn as a flow chart)

This flow chart has a special property, depth: the length of the longest path from an input to an output. Deep learning is not a new concept, but Hinton and colleagues set off a wave of enthusiasm in 2006. Although many people talk about deep learning these days, who is actually putting the technology to practical use? What elements does a company need to build a mature deep-learning practice? Here is our view.

| Computing power

First, deep neural networks are complex, and so are the data and the required computation. The number of neurons and the number of connections in a deep neural network are staggering. Mathematically, each neuron involves calculation (such as a Sigmoid, ReLU or Softmax function), and the number of parameters to estimate is also huge. In speech and image recognition applications, the neurons run into the tens of thousands and the parameters into the tens of millions; the model's complexity translates directly into computation. Computing power is therefore the foundation of applied deep learning.
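As a rough illustration of the arithmetic above (a sketch with made-up layer sizes, not any specific production model), the per-neuron computation and the parameter count of a fully connected network can be written in a few lines of plain Python:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def relu(x):
    return max(0.0, x)

def softmax(xs):
    m = max(xs)                          # subtract max for numerical stability
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def neuron(inputs, weights, bias, act):
    """One neuron: weighted sum of inputs, plus bias, through an activation."""
    return act(sum(w * x for w, x in zip(weights, inputs)) + bias)

def param_count(layer_sizes):
    """A layer with n_out units fed by n_in inputs holds n_in*n_out weights
    plus n_out biases; sum over all consecutive layer pairs."""
    return sum(layer_sizes[i] * layer_sizes[i + 1] + layer_sizes[i + 1]
               for i in range(len(layer_sizes) - 1))

# Even a modest speech-sized network reaches tens of millions of parameters:
print(param_count([440, 2048, 2048, 2048, 2048, 2048, 9000]))  # 36,129,576
```

Evaluating tens of millions of such multiply-accumulate operations per input frame is exactly why computing power is the bottleneck the text describes.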


Moreover, computing power is also a weapon for advancing deep learning: the stronger the compute, the more experience can be accumulated in a given time, and the faster iteration becomes. Baidu's chief scientist Dr. Andrew Ng believes the frontier of deep learning is shifting toward high-performance computing (HPC), one of his current priorities at Baidu. In his view, many of deep learning's successes owe much to the active pursuit of available computing power. In 2011 Jeff Dean (one of the designers of Google's second-generation machine-learning system, TensorFlow) founded and led the Google deep-learning team, using Google's cloud to scale deep learning and pushing it into industry. In 2013 Dr. Coates and colleagues built the first HPC system for deep learning, improving scalability by one to two orders of magnitude and triggering a revolutionary advance. The supporting role of computing power in deep learning is irreplaceable.

The companies leading in this area are large Internet firms like Baidu and Google, but startups such as Horizon Robotics have also made considerable achievements in the field. Horizon Robotics, founded by Dr. Yu Kai, former head of Baidu's Institute of Deep Learning, designs deep neural network chips that, unlike general-purpose CPUs, support deep neural network tasks in image, speech, text and control rather than trying to do everything, and are two to three orders of magnitude more efficient than running the same networks in software on a CPU.


| Algorithms

Now that computing power is becoming ever cheaper, deep learning tries to build ever larger and more complex neural networks. We can understand the algorithm as the deep-learning network itself, or its computational thinking: the more complex the network, the more precise the signals it can capture. Common algorithms include Deep Belief Networks, Convolutional Neural Networks, Restricted Boltzmann Machines and Stacked Auto-encoders; of these, learning methods based on convolutional neural networks are by far the most used and most effective.
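As a minimal illustration of the convolution operation at the heart of convolutional neural networks (our own sketch of the basic operation, not any of the named algorithms in full), here is a valid 2-D convolution with stride 1 in plain Python:

```python
def conv2d(image, kernel):
    """Slide a small kernel over a 2-D input; 'valid' convolution, stride 1."""
    kh, kw = len(kernel), len(kernel[0])
    out_h = len(image) - kh + 1
    out_w = len(image[0]) - kw + 1
    out = []
    for i in range(out_h):
        row = []
        for j in range(out_w):
            s = 0.0
            for di in range(kh):
                for dj in range(kw):
                    s += image[i + di][j + dj] * kernel[di][dj]
            row.append(s)
        out.append(row)
    return out

# A 3x3 vertical-edge kernel responds strongly where intensity changes:
image = [[0, 0, 1, 1],
         [0, 0, 1, 1],
         [0, 0, 1, 1],
         [0, 0, 1, 1]]
vertical_edge = [[-1, 0, 1],
                 [-1, 0, 1],
                 [-1, 0, 1]]
print(conv2d(image, vertical_edge))  # [[3.0, 3.0], [3.0, 3.0]]
```

Stacking many such filter layers, with learned kernels, is what lets a CNN build up precise signals from raw input.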

But the current problem is that everyone puts their focus on data and computation, because the networks themselves do not differ much and upgrading the core algorithms is very hard: problems such as local optima, the design of cost functions and the design of the overall network system remain open. This also gives many startups a new line of thought: why not take the road others avoid? If the algorithms can be improved, the future is unlimited.

| Data

Deep learning is fast becoming a hot topic in advanced data analysis, and the sheer volume of data is one of the key factors driving deep-learning tools and technology forward. Daniel McDuff, chief scientist and research director at Affectiva, notes that once an emerging company has accumulated enough data, the technology can play its role better. Applying and developing deep learning requires not only a lot of training time up front; after a product launches, it also needs more real-time user data for continuous iteration and updating.


In the deep-learning competition China still has a good chance: the availability of data over the Internet and low-cost labour give China vast amounts of data and very cheap data labeling. The problem is that most of the domestic market's large data sets are controlled by Internet giants such as BAT. Startups find it very hard to get the data needed to improve and update their networks, especially after product launch, and may even face deliberate marginalization by large companies, making data even harder to obtain: surviving in the cracks, so to speak.


| Application scenarios

There are not yet many application scenarios for deep learning; the most common and most successful are the two fields of speech recognition and image processing. The three beasts mentioned earlier (computing power, algorithms and data) belong to the development side, while application scenarios belong to the consumption side. As deep learning develops and user needs grow, application scenarios will multiply: the face recognition built into many smartphones for classifying photos already reaches considerable accuracy, and financial tools such as PayPal will most likely add face recognition to increase security... The future of deep learning is not limited to speech and image recognition; there are many more possibilities. For startups, rather than competing in this mature market against Google, Facebook, Amazon and BAT, which have more than ten years of accumulated data, it is better to carve out a little world of their own.


Deep learning is now as hot as any other field, and the Internet giants are all trying to grab a slice of the cake. Doing deep learning well really requires integrating all four guardian beasts: computing power, algorithms, data and application scenarios. The BAT-scale giants hold the resource advantage in all of them, and it is hard for a startup to score on all four, especially data. The way to seize the initiative in the market is therefore to use your comparative advantage to innovate on one of them, whether computing power, algorithms or application scenarios.

Lei Feng's network (search for "Lei Feng's network" on WeChat) note: this article is from the WeChat public account Linear Capital (public account: LinearVenture), published with authorization. For reprints please contact us for authorization, keep the source and author, and do not delete content. Linear Capital's official public platform focuses on early-stage investing in three major areas: pan-intelligence, fintech and VR/AR.

Thursday, August 11, 2016

A tour of the 2,299-yuan large-screen 2K Honor NOTE8

On August 1, 2016, at its press conference, Honor officially released its new large-screen smartphone, the Honor NOTE8, defining what it called a new standard for large-screen phones.


The Honor NOTE8 uses an all-metal body and is equipped with a 6.6-inch 2K Super AMOLED screen and a 4,500 mAh battery, giving consumers a better entertainment experience.


The Honor NOTE8 comes in three colors: platinum gold, ice silver and elegant gray. A hidden breathing light, hidden microphone and hidden light sensor keep the body clean and compact, and the 7.18 mm body thickness and 2.5D arc panel design give users a good grip.


The back of the Honor NOTE8 uses a three-section design; the middle section is a ceramic-metal material made with a sandblasting process, which improves its texture.


The Honor NOTE8 runs a Kirin 955 processor, carries the 4,500 mAh large-capacity battery with 9V/2A fast charging over Type-C, and offers 4GB of LPDDR4 RAM with 32/64/128GB of storage. Its cameras pair an 8-megapixel front sensor with a 13-megapixel rear IMX278 sensor that supports optical image stabilization.


The Honor NOTE8's 6.6-inch 2K Super AMOLED screen reaches 443 ppi with 105% color saturation. Paired with a professional eye-care mode that filters unwanted blue light, it reduces blue-light exposure and relieves eye strain while you watch videos, read or play games.


In addition, Youku and the Westward Journey mobile game have partnered with Honor: users can activate a 3-month video membership to enjoy VIP video service and receive an exclusive Westward Journey game package.


The Honor NOTE8 comes in three versions: 4GB+32GB at 2,299 yuan, 4GB+64GB at 2,499 yuan, and 4GB+128GB at 2,799 yuan.

Friday, August 5, 2016

Essentials | Progress in speech recognition frameworks: the deep fully convolutional neural network debuts

Overview: the best speech recognition systems to date use bidirectional long short-term memory networks (LSTM, Long Short-Term Memory), but these systems suffer from high training and decoding latency and complexity, which makes them hard to apply in real-time industrial recognition systems. This year iFlytek proposed a new speech recognition framework, the deep fully convolutional neural network (DFCNN, Deep Fully Convolutional Neural Network), which is better suited to industrial applications. This article is iFlytek's detailed interpretation of applying DFCNN to speech transcription, and also analyses spoken-language and discourse-level language-model processing, noise and far-field recognition, and real-time error correction and text post-processing.



Among artificial intelligence applications, speech recognition has made significant progress this year: whether in English, Chinese or other languages, machine recognition accuracy keeps rising. Speech dictation has developed fastest and is already widely applied and mature in products such as voice input, voice search and voice assistants. Another dimension of voice application, speech transcription, still faces difficulties, because users recording an audio file do not expect it to be fed to speech recognition. Compared with dictation, transcription must cope with casual speaking styles, strong accents, variable recording quality and many other challenges.

Speech transcription scenarios include journalists' interviews, TV programmes, classes and conferences: any audio file from anyone's everyday working life. The market and the imagination space for speech transcription are huge. Imagine if machines conquered speech transcription: TV could caption itself automatically, formal meetings could produce minutes automatically, journalists' interview recordings could draft themselves... In a lifetime a person says far more than they write; if software could record and efficiently manage everything we say, the world would be incredible.

Acoustic modeling technology based on DFCNN

Acoustic modeling in speech recognition mainly models the relationship between the speech signal and phonemes. On December 21 last year iFlytek proposed the feed-forward sequential memory network (FSMN, Feed-forward Sequential Memory Network) as an acoustic modeling framework; this year it launched a new speech recognition framework, the deep fully convolutional neural network (DFCNN, Deep Fully Convolutional Neural Network).

The best speech recognition systems to date use bidirectional long short-term memory networks (LSTM), which can model long-term dependencies in speech and so improve recognition accuracy. But bidirectional LSTM networks have high training and decoding latency and complexity, which makes them hard to apply in real-time industrial systems. iFlytek's deep fully convolutional neural network overcomes these shortcomings of the bidirectional LSTM.

CNNs were used in speech recognition systems as early as 2012, but without major breakthroughs. The main reason is that they took fixed-length spliced frames as input, so they could not see long enough speech context; another defect was that they used the CNN merely as a feature extractor, with few convolutional layers and limited expressive power.

To solve these problems, DFCNN uses many convolutional layers to model entire sentences of speech directly. First, DFCNN takes the spectrogram directly as input, a natural advantage over frameworks that use traditional speech features. Second, its model structure borrows network configurations from image recognition: each convolution uses small kernels, pooling layers follow multiple convolutions, and by accumulating many such convolution-pooling pairs the network can see very long history and future information. These two points give DFCNN excellent expression of long-term dependencies in speech, greater robustness than RNN architectures, and short-latency online decoding, so it can be used in industrial systems.

(Figure: DFCNN architecture)
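To see why stacking many small convolutions and pooling layers lets DFCNN "see" long history and future, one can compute the receptive field of such a stack: how many input spectrogram frames one top-level unit depends on. The layer configuration below is our own illustrative guess, not iFlytek's published architecture:

```python
def receptive_field(layers):
    """layers: list of (kernel_size, stride) along the time axis.
    Standard receptive-field recursion: each layer widens the field by
    (kernel_size - 1) current jumps; striding/pooling multiplies the jump."""
    rf, jump = 1, 1
    for k, s in layers:
        rf += (k - 1) * jump
        jump *= s
    return rf

# Six blocks, each a 3-frame convolution followed by stride-2 pooling:
stack = [(3, 1), (2, 2)] * 6
print(receptive_field(stack))  # 190 frames (~1.9 s of audio at 10 ms/frame)
```

Even this modest stack of small kernels covers nearly two seconds of context, which is how accumulated convolution-pooling pairs substitute for the long-range modeling of a bidirectional LSTM.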

Spoken-language and discourse-level language-model processing technology

Language models in speech recognition mainly model the correspondence between phonemes and words. Human spoken language is natural language without organization: in free conversation people often hesitate, repeat themselves and use filler words, whereas text corpora are usually in written form. The gap between the two poses severe challenges for spoken-language modeling.

iFlytek borrowed the noise-robust training ideas of speech recognition: on top of written language it automatically adds the "noise" phenomena of spoken language, such as hesitations, repetitions and filler words, so it can automatically generate massive amounts of spoken-style text and resolve the mismatch between spoken and written language. First, it collects a corpus of parallel spoken and written text; second, using an Encoder-Decoder neural network framework, it models the correspondence between written and spoken text, enabling automatic generation of spoken text.
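The idea of adding spoken-language "noise" to written text can be caricatured in a few lines. This toy version inserts fillers and repetitions at random; the real system described above learns the transformation with an Encoder-Decoder network, and the filler words and rates here are entirely our own:

```python
import random

FILLERS = ["uh", "um", "well"]  # illustrative filler inventory, our own choice

def spokenize(sentence, p_filler=0.2, p_repeat=0.1, seed=0):
    """Inject spoken-style 'noise' into written text: with probability
    p_filler insert a hesitation before a word, with probability p_repeat
    stutter the word once."""
    rng = random.Random(seed)
    out = []
    for word in sentence.split():
        if rng.random() < p_filler:
            out.append(rng.choice(FILLERS))   # hesitation before the word
        out.append(word)
        if rng.random() < p_repeat:
            out.append(word)                  # stuttered repetition
    return " ".join(out)

print(spokenize("the meeting starts at nine tomorrow morning", seed=1))
```

Running such a generator over a large written corpus yields spoken-style training text whose underlying word sequence is still known, which is exactly the mismatch-bridging trick the paragraph describes.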

Moreover, context can greatly help people understand language, and the same is true for machine transcription. So on December 21 last year iFlytek proposed a discourse-level language model: it automatically extracts key information from the decoded recognition results, searches and processes related data in real time, and forms a dedicated language model from the decoding and search results, further improving transcription accuracy.

(Figure: discourse-level language model flow chart)

Noise and far-field recognition technology

Far-field pickup and noise are the two major technical problems for speech recognition. For example, in a conference scene recorded with a voice recorder, the voices of speakers far from the device arrive as far-field, reverberant speech. Reverberation superimposes unsynchronized copies of the speech on one another, producing a masking effect between phonemes that seriously hurts recognition. Likewise, if there is background noise in the recording environment, the speech spectrum is polluted and recognition accuracy drops sharply. iFlytek tackles this with noise-reduction and dereverberation technology for two kinds of hardware, single microphones and microphone arrays, bringing speech transcription in noisy scenarios to a practical threshold.

Single-microphone noise reduction and dereverberation

For single-channel speech, iFlytek combines noise-added mixed training with a recurrent-neural-network-based dereverberation and denoising method. On the one hand, clean speech is mixed with noise and trained together with clean speech, improving the model's robustness to noisy speech (editor's note: "robust" is transliterated as lu bang, meaning sturdy and strong); on the other hand, a deep recurrent neural network performs noise reduction and dereverberation, further improving recognition accuracy for noisy and far-field speech.
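The noise-added training data described above is typically prepared by mixing clean speech with noise at a chosen signal-to-noise ratio. A bare-bones sketch of that mixing step (all names and signal choices are our own illustration, not iFlytek's pipeline):

```python
import math
import random

def power(signal):
    """Mean squared amplitude of a signal."""
    return sum(x * x for x in signal) / len(signal)

def mix_at_snr(clean, noise, snr_db):
    """Scale `noise` so the clean-to-noise power ratio equals snr_db, add it."""
    target_noise_power = power(clean) / (10 ** (snr_db / 10.0))
    gain = math.sqrt(target_noise_power / power(noise))
    return [c + gain * n for c, n in zip(clean, noise)]

# Toy example: a 440 Hz tone at 16 kHz plus uniform noise at 10 dB SNR.
rng = random.Random(0)
clean = [math.sin(2 * math.pi * 440 * t / 16000) for t in range(1600)]
noise = [rng.uniform(-1, 1) for _ in range(1600)]
noisy = mix_at_snr(clean, noise, snr_db=10.0)
```

Training on many such (noisy, clean) pairs across a range of SNRs is what gives the model its robustness to real noisy recordings.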


Microphone-array noise reduction and dereverberation

Processing noise only in the captured speech is a stopgap; solving reverberation and noise at the source seems to be the crux of the matter. Facing this challenge, iFlytek's researchers equip recording devices with multi-microphone arrays and use the array to suppress noise and reverberation. Specifically, with multiple microphones capturing multi-channel signals, a convolutional neural network performs beamforming, forming a pickup beam in the direction of the target speaker and attenuating reflections from other directions. Combined with the single-microphone dereverberation and denoising above, this further significantly improves recognition accuracy for noisy and far-field speech.
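As a point of reference for the array idea, here is the classic delay-and-sum beamformer in plain Python. The system described above uses a learned CNN beamformer instead; this sketch only shows why aligning channels toward the target direction helps: compensated channels add coherently while off-beam sound adds incoherently and is attenuated.

```python
import math

def delay_and_sum(channels, arrival_delays):
    """channels: equal-length signals; arrival_delays: known per-mic delay
    (in samples) of the target source. Advance each channel by its delay so
    the target adds coherently, then average across microphones."""
    n = len(channels[0])
    out = []
    for t in range(n):
        acc = 0.0
        for ch, d in zip(channels, arrival_delays):
            idx = t + d
            if 0 <= idx < n:
                acc += ch[idx]
        out.append(acc / len(channels))
    return out

# Two mics hear the same source, the second delayed by 3 samples;
# compensating that delay reconstructs the source almost exactly.
source = [math.sin(0.3 * t) for t in range(100)]
mic1 = source
mic2 = [0.0] * 3 + source[:-3]
aligned = delay_and_sum([mic1, mic2], arrival_delays=[0, 3])
```

In a real array the delays come from the geometry and the steering direction; a learned beamformer replaces the fixed delay-and-average with trained filters per channel.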


Real-time error correction and text post-processing

The technologies mentioned so far only process the audio and transcribe it into text. But as noted above, human spoken language is unorganized, so even when transcription accuracy is very high, the raw transcript is still hard to read, which shows the importance of text post-processing. Post-processing means segmenting the transcribed spoken text into clauses and paragraphs, smoothing its fluency, and even summarizing its content, so that it is easier to read and edit.

Post-processing I: clause and paragraph segmentation

Clause segmentation divides the transcribed text into clauses and adds punctuation between them; paragraph segmentation divides the text into several semantic paragraphs, each describing a different sub-topic.

By extracting contextual semantic features and combining them with acoustic features, the system segments clauses and paragraphs. Since annotated speech data is hard to obtain, in practice iFlytek uses a two-level cascade of bidirectional long short-term memory networks, which better solves the clause and paragraph segmentation problem.
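A much-simplified stand-in for clause segmentation can make the task concrete. The real system uses cascaded bidirectional LSTMs over semantic and acoustic features; this sketch just splits wherever the silence between two recognized words exceeds a threshold, with word timings invented for illustration:

```python
def split_clauses(words, pause_threshold=0.5):
    """words: list of (word, start_sec, end_sec) tuples from a recognizer.
    Start a new clause whenever the inter-word pause exceeds the threshold."""
    clauses, current = [], []
    prev_end = None
    for word, start, end in words:
        if prev_end is not None and start - prev_end > pause_threshold:
            clauses.append(" ".join(current))
            current = []
        current.append(word)
        prev_end = end
    if current:
        clauses.append(" ".join(current))
    return clauses

transcript = [("hello", 0.0, 0.4), ("everyone", 0.5, 1.0),
              ("today", 2.0, 2.4), ("we", 2.5, 2.6), ("begin", 2.7, 3.1)]
print(split_clauses(transcript))  # ['hello everyone', 'today we begin']
```

Pauses alone are an unreliable cue (speakers pause mid-clause and rush across boundaries), which is precisely why combining acoustic features with learned semantic features, as described above, works better.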

Post-processing II: smoothing

Smoothing, also known as disfluency detection, removes the hesitations, filler words and repeated words from the transcription result, making the text smoother and easier to read.

iFlytek combines generalized features with bidirectional long short-term memory network modeling, bringing smoothing accuracy to a practical stage.
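A rule-based caricature of the smoothing step (the real system models it with bidirectional LSTMs; the filler list and rules here are our own) shows what the task removes:

```python
FILLER_WORDS = {"uh", "um", "er", "like"}  # illustrative only; a real system
                                           # disambiguates fillers in context

def smooth(tokens):
    """Drop hesitation fillers and collapse immediate word repetitions."""
    out = []
    for tok in tokens:
        if tok.lower() in FILLER_WORDS:
            continue                       # drop hesitation fillers
        if out and out[-1].lower() == tok.lower():
            continue                       # collapse stuttered repeats
        out.append(tok)
    return out

print(smooth("so um we we should uh ship ship it like today".split()))
# ['so', 'we', 'should', 'ship', 'it', 'today']
```

Fixed rules like these also delete legitimate uses ("I like today"), which is why the learned sequence model described above is needed for practical accuracy.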

Source: the iFlytek public account