GPT-3: And In The Beginning Was… The Word (Part 1/2)

30-Second Summary

  • GPT-3, born in May, has created both fear and excitement in the community of developers and digital workers. Many are expressing their astonishment, and a first wave of GPT-3-powered applications that produce human-like text is emerging.
  • It seems to me that simply explaining the principles of Machine Learning, a technology inspired by the brain whose extraordinary capacities grow a little more seductive every day, makes it possible to keep a critical eye on this astonishing technology.
  • GPT-3 is trained on data from virtually the entire Internet. GPT-3 is an unsupervised learning algorithm using a Generative Adversarial Network (GAN).
  • The brain has an incredible architecture for comprehending the world. Parameters in Machine Learning are inspired by biological neurons. The brain and Artificial Neural Networks (ANNs) are similar but not identical: the brain is an order of magnitude more complex than an ANN.
  • GPT-3 passes a certain type of Turing test, but it is not yet a human-like intelligence. One essential thing is missing: emotions.

And In The Beginning Was… GPT-3

May 28, 2020. The Generative Pre-trained Transformer 3 (GPT-3) was officially released, in the form of a scientific publication, and has been in beta testing as of July 2020. It is a natural language processing (NLP) neural network created by OpenAI.

OpenAI is an artificial intelligence (AI) research lab that was once sponsored by SpaceX and Tesla CEO Elon Musk. GPT-3 makes developers, geeks and techno-skeptics around the world fantasize and shudder with its ability to imitate human beings.

Why has this technology captivated people's imagination? Why are some worried it could prove to be dangerous? Over the past few weeks, a number of samples generated by GPT-3 have started to circulate on social networks: generating website layouts, translating English into LaTeX equations, answering questions about Excel functions, writing SQL queries, generating SVG graphs, or even creating a React app that describes itself.

One of the applications that has been talked about the most is the generation of the article "GPT-3 may be the biggest thing since bitcoin", which explains how GPT-3 could become the next big disruption. That text was produced by GPT-3 itself! How can a machine do these things? And will it replace us? Is this the beginning of a new era for machine intelligence? Fear and excitement settle in our minds…

But fear and excitement have always sold better than dreams, because our brain, out of survival instinct, records more of the facts and ideas that can threaten our species. When something is scary, perhaps the best response is to demystify it by trying to understand its inner workings.

Machine Learning Principles

GPT-3 uses deep learning, a family of machine learning (ML) methods, to produce human-like text for tasks such as translation, spelling correction and sentence auto-completion. It can also produce answers to open questions, generate texts imitating the style of famous authors, code web pages, and it even shows a surprising ability to solve arithmetic problems. All without human supervision.

"Machine learning is using data to answer questions", as defined by Yufeng Guo, developer advocate for Google Cloud Platform.

Data Is the Fuel

When we go into detail, Machine Learning can be broken down into seven steps, plus an upfront step: a hypothesis or a business problem to solve:

(Hypothesis →) Data collection → Data preparation → Choosing ML Model → Model training → Evaluation → Hyper-parameter tuning → Prediction

If we simplify it:

Training data → Model → Prediction (or inference)

How much data do we need to train this model? Millions of samples! As much as the servers and the computing power we can afford will allow.

For example, if I had a dataset with two variables, age (input) and height (output), I could implement a supervised learning model to predict a person's height based on their age.
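
A minimal sketch of this example, assuming scikit-learn and a small synthetic age/height dataset (the numbers are illustrative), with comments mapping to the pipeline steps listed above:

```python
# Hedged sketch: supervised learning of height from age with scikit-learn.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

# Data collection: synthetic ages (input) and heights in cm (output).
rng = np.random.default_rng(0)
ages = np.arange(2, 16, dtype=float).reshape(-1, 1)
heights = 80 + 6.5 * ages.ravel() + rng.normal(0, 2, len(ages))

# Data preparation: hold out part of the data for evaluation.
X_train, X_test, y_train, y_test = train_test_split(
    ages, heights, test_size=0.3, random_state=0)

# Choosing the ML model + model training: a simple linear regression.
model = LinearRegression().fit(X_train, y_train)

# Evaluation: how far off are the predictions, on average?
print("mean error (cm):", mean_absolute_error(y_test, model.predict(X_test)))

# Prediction (inference): estimate the height of a 10-year-old.
print("height at age 10:", model.predict([[10.0]])[0])
```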

What happens when the data or inputs are not of good quality? The model is likely to generate invalid predictions.

Training data → Model → Invalid prediction

In general, we will then seek to quantify the invalid predictions made by the model and to minimize them. When the margin of invalid predictions reaches a level considered acceptable, the learning phase is deemed over and the values of the model parameters optimal.
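
As a sketch of that stopping rule, assuming a generic training loop; `train_one_epoch` and `fraction_invalid` are hypothetical helpers standing in for whatever framework is used:

```python
# Hedged sketch: stop training once the error margin is acceptable.
ACCEPTABLE_ERROR = 0.05   # 5% invalid predictions tolerated (an assumption)
MAX_EPOCHS = 100

for epoch in range(MAX_EPOCHS):
    train_one_epoch(model, training_data)              # hypothetical helper
    error = fraction_invalid(model, validation_data)   # hypothetical helper
    if error <= ACCEPTABLE_ERROR:
        # Learning phase is over; parameter values are considered optimal.
        print(f"done after {epoch + 1} epochs, error={error:.3f}")
        break
```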

GPT-3 is trained on all Internet data. All, or almost. In fact, it is a large part of the Internet's data, saved each month by Common Crawl (an open repository of web crawl data that can be accessed by anyone), which was used to train this algorithm based on the "Transformer" principle invented barely three years ago by the engineers of Google.

This artificial neural network (ANN) is simply trained to predict the next word in a gigantic linguistic corpus of several billion sentences, in which the biggest encyclopedia in the world, Wikipedia, represents only 3% of the corpus. The rest comes from digitized books and various web links. That means the GPT-3 training data includes not just things like news articles, recipes and poetry, but also coding manuals, fiction, religious prophecy, and whatever else we can imagine. Any type of text that has been uploaded to the Internet has likely become useful material for the GPT-3 patterns. And it also includes bad sources such as pseudoscientific books, conspiracy theories…
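
As a toy illustration of this next-word objective (a simple counting table, nothing like GPT-3's actual architecture), here is a bigram model that predicts the most frequent successor of each word:

```python
# Hedged sketch: "predict the next word" as bigram counting on a tiny corpus.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows each word in the training text.
follows = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    follows[current_word][next_word] += 1

def predict_next(word):
    # Return the most frequent successor seen in training.
    return follows[word].most_common(1)[0][0]

print(predict_next("the"))   # 'cat' (seen twice after 'the' in the corpus)
```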

The Transformer is a method that contextualizes the meaning of each word much more deeply, by taking into account the position of the word in the sentence. An additional attention mechanism allows the algorithm to relate distant linguistic units, so as to link subject, verb and direct object in a long sentence, or to take into account the semantic context connecting the different sentences of the same paragraph.
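
A minimal NumPy sketch of this attention idea, assuming the scaled dot-product form used in Transformers; the toy shapes and random embeddings are purely illustrative:

```python
# Hedged sketch: scaled dot-product attention over a toy "sentence".
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    # Q, K, V: (sequence_length, d) matrices of queries, keys, values.
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)                   # score every pair of positions
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over positions
    return weights @ V                              # each output mixes the whole sentence

# Toy sentence of 4 tokens with 8-dimensional embeddings (self-attention).
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))
print(scaled_dot_product_attention(x, x, x).shape)  # (4, 8)
```

Because every position is scored against every other, the first and last words of a long sentence are related as directly as adjacent ones.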

Unsupervised Learning

In machine learning, tasks can be classified into two categories: supervised and unsupervised learning problems.

For the first, the principle is very simple. You start by feeding the algorithm basic explicit criteria in the form of "labelled" data. Then you train the algorithm, and you correct it in case of a wrong answer. You use a teacher, a "supervisor": this is called "supervised learning". After a certain time, the algorithm will have developed categorization criteria by itself. So this method consists of teaching a function to map an input to an output based on known examples (input-output pairs), as in the age/height example above.

In contrast to supervised learning, unsupervised learning is a machine learning technique where you do not need to supervise the model. This technique consists in having only input data and no corresponding output variables. The algorithm discovers structure in "chaotic", unlabelled data. The goal is to model the underlying structure and distribution of the data in order to learn more about it. It helps to find all kinds of unknown patterns in the data. This technique allows you to perform more complex tasks, and can be more unpredictable, compared to other methods.
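
A minimal sketch of unsupervised learning, assuming scikit-learn's k-means and synthetic unlabelled points; the algorithm receives no output variable, yet discovers the two groups on its own:

```python
# Hedged sketch: k-means finds structure in unlabelled data.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Inputs only, no labels: two blobs of 2-D points around (0,0) and (5,5).
X = np.vstack([rng.normal(0, 0.5, (50, 2)), rng.normal(5, 0.5, (50, 2))])

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print(kmeans.cluster_centers_)   # close to (0, 0) and (5, 5)
```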

The Generative Pre-trained Transformer is an unsupervised learning algorithm using a generative adversarial network (GAN). The father of the GAN concept, Ian Goodfellow, who moved to Apple in March in a director role, was one of the top minds in AI at Google and was named one of MIT's Innovators Under 35 in 2017.

A GAN is a framework in which two neural networks compete to become more accurate in their predictions. The generative model is pitted against an adversary. A generator learns to generate samples. A discriminator learns to distinguish the generated samples from real ones (just a basic true/false output). The generator learns to produce more and more realistic samples to trick the discriminator, while the discriminator becomes better and better at distinguishing generated samples from real ones.
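
A compressed sketch of this generator-versus-discriminator game, assuming PyTorch and a toy task (matching a normal distribution centred at 4); the architectures and hyper-parameters here are illustrative, not those of any real system:

```python
# Hedged sketch: a minimal GAN on 1-D data (real samples ~ N(4, 1)).
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))                # generator
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())  # discriminator
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    real = torch.randn(32, 1) + 4.0   # real samples
    fake = G(torch.randn(32, 8))      # generated samples from random noise

    # Discriminator learns to output 1 for real, 0 for fake (true/false).
    d_loss = bce(D(real), torch.ones(32, 1)) + bce(D(fake.detach()), torch.zeros(32, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator learns to make the discriminator answer "real" on fakes.
    g_loss = bce(D(fake), torch.ones(32, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

print(G(torch.randn(1000, 8)).mean().item())  # drifts toward ~4.0
```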

What we are experiencing is less a revolution in AI than an important bifurcation. The power of our supercomputers allows GANs, through trial and error, to explore solution spaces of almost infinite size. OK, but is this how the brain really works?

The Marvelous Human Cognitive Architecture

The Human Counterpart of AI

Artificial neural networks (ANNs) are inspired by the functioning of the human brain. They are a mathematical and algorithmic model which simulates, as closely as current knowledge allows, the computational units that each of us carries by the billions.

The human brain has many neurons which pass an action potential along an axon and across a neural synapse. It has billions of neurons that are interconnected via synapses.

Neurons (or nerve cells) are specialized cells that transmit and receive electrical signals in the body. Neurons are composed of three main parts: dendrites, a cell body, and an axon. Signals are received through the dendrites. The signal travels to the cell body and continues down the axon until it reaches the synapse. The synapse is the region of interaction between two neurons that allows a signal to pass. A neuron is a computational unit which has one or more inputs and a calculated output.

From a computational point of view, an artificial neuron is just a mathematical and computational representation of a biological neuron. A single neuron is very limited because it is "binary": it can only separate two sets of inputs/outputs. It is great for doing basic things (binary calculus, comparison, memory, "linear" decisions) but not for more complex problems. That is why it is combined with other neurons to create an ANN. An ANN is organized into one or more layers of neurons. These layers are connected to each other, following several different topologies, each layer comprising several neurons.
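
A sketch of such a single neuron: a weighted sum of inputs passed through a threshold. The hand-picked weights below implement AND, which is linearly separable; XOR is not, which is exactly why neurons are combined into layered networks:

```python
# Hedged sketch: one artificial neuron = weighted sum + threshold.
import numpy as np

def neuron(inputs, weights, bias):
    # Fire (1) if the weighted sum of inputs exceeds the threshold, else 0.
    return 1 if np.dot(inputs, weights) + bias > 0 else 0

w, b = np.array([1.0, 1.0]), -1.5   # hand-picked weights implementing AND
for x in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(x, "->", neuron(np.array(x), w, b))
# (0,0)->0  (0,1)->0  (1,0)->0  (1,1)->1 : AND works; XOR needs a layered ANN.
```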

In ML, a neuron is called a parameter. But a model parameter is not the same as a biological neuron. A biological neuron is not as simple as an artificial neuron: it takes many anatomical forms, with various shapes and structures such as basket cells, pyramidal cells, Betz cells and so on. Based on the functions they perform, neurons are divided into three basic types: afferent (sensory) neurons, efferent (motor) neurons, and interneurons.

A parameter is a configuration variable. It is internal to the ML model, and its value can be estimated from data. Parameters are required by the model when making predictions.

An ANN uses a lot of parameters. A parameter is just a value, a weight of relevance. A biological neuron, for its part, is not a mere switch, true or false, 0 or 1: neurons involve multiple neurotransmitters (dopamine, serotonin…) and receptors. A neuron is an order of magnitude more complex than a parameter.

Our brain has the most sophisticated cerebral architecture. This marvelous architecture and the mental chains it underlies form the foundations of human intelligence. This architecture gives us abstraction. It allows us to comprehend the world and adapt it to our needs. The number of synapses in the brain is known much less precisely, but it is probably about 100 trillion. What sets GPT-3 apart from previous versions is its size. GPT-3 has 175 billion parameters, increasing the capacity of its predecessor, GPT-2, from 1.5 billion to 175 billion.
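
Some back-of-the-envelope arithmetic with these figures, assuming (as the text does further below) a rough one-parameter-per-synapse mapping:

```python
# Hedged sketch: scale comparison using the figures quoted in the text.
gpt2_params = 1.5e9        # GPT-2 parameters
gpt3_params = 175e9        # GPT-3 parameters
brain_synapses = 100e12    # approximate synapse count cited above

print(gpt3_params / gpt2_params)     # ~117x: the GPT-2 -> GPT-3 jump
print(brain_synapses / gpt3_params)  # ~571x: synapses still dwarf GPT-3's parameters
```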

Learning Speed

When we study the brain a little, we figure out that it is a much more complex mechanism. We cannot compare machine intelligence with the human brain, but parts of the system are similar.

The brain-like qualities of neural networks are sometimes, on specific tasks, more powerful and more efficient than a human. When developing a neural network, a number of global parameters are taken into account in order to improve accuracy: the optimization function, with its gradient and its "learning speed" (the learning rate), and the batch size, that is to say the number of samples used to train the network at each step.
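
A minimal sketch of these global parameters in action: mini-batch gradient descent in plain NumPy, with an explicit learning rate and batch size, fitting a one-weight model on synthetic data:

```python
# Hedged sketch: mini-batch gradient descent with learning rate and batch size.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=1000)
y = 3.0 * X + rng.normal(scale=0.1, size=1000)   # true weight is 3.0

w = 0.0
learning_rate = 0.1    # the "learning speed" of the optimization
batch_size = 32        # number of samples used per update

for step in range(200):
    idx = rng.integers(0, len(X), batch_size)    # draw one mini-batch
    xb, yb = X[idx], y[idx]
    grad = 2 * np.mean((w * xb - yb) * xb)       # gradient of the mean squared error
    w -= learning_rate * grad                    # one gradient-descent update

print(w)   # close to 3.0
```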

Most humans learn over decades, with teachers, courses, books, friends, mentors… GPT-3 was trained on the whole Common Crawl dataset in less than a year!

How Close Is AI to Mimicking the Human Brain?

That helps us understand a little better how close machine intelligence is to human intelligence. One of the most spectacular feats of GPT-3 is passing the famous Turing test. Or rather, a particular version of the Turing test, since there are many variations of it today. Can you build an AI that can convincingly pass itself off as a person? OpenAI researchers invited more than 600 participants to assess whether short journalistic texts were generated by artificial intelligence or by a human being.

The result is clear-cut. The texts generated by the most evolved version of the algorithm are indistinguishable from those written by real journalists. The results can be technically impressive, and also fun or thought-provoking, as the poems, code and other experiments attest.

The real question behind this test is not "is there a difference between human and machine?", but rather "how far can a simulating artifact (designed, invented and programmed by human intelligence) deceive us?". Because it is indeed an illusion, a representation or a simulation, not reality.

In fact, one of the major open problems in AI is training an AI with less data and fewer steps. Few-shot learning may not seem like a big deal, but it is one of the main open issues in AI. Human beings can learn a new task after being shown it only a few times. Lucky for us, kids don't need to see a million car photos before they can reliably recognize cars on their own. This ability to learn complex tasks from just a few examples has so far escaped machines, despite the efforts of researchers. The deep neural networks' thirst for data is a major drawback, because for many tasks there is not a lot of data available, and creating new, labeled training sets is expensive. Few-shot learning, if it worked well, would democratize the use of AI in many more areas than is currently the case.

GPT-3 does not solve few-shot learning, but it opens up an amazing direction of development. If increasing the size of the model so drastically improves few-shot performance, then maybe increasing the scale another hundredfold (the difference between GPT-2 and GPT-3) would bring few-shot performance close to, or more or less above, the human level. If scale really is the solution to human intelligence, then GPT-3 is still around 1000 times too small. This assumes that synaptic connections map roughly one-to-one onto the parameters of the neural network, which of course is not the case, since, as we have seen, human neurons are more complex than their artificial counterparts.

The Intelligences of the Heart

It is from this complexity, this marvelous brain architecture, that what we call "intelligence" derives. The notion of human intelligence is based on a shared intuition, according to which it is easy to distinguish those individuals whom everyone calls intelligent from those who are much less so. But we could just as well speak of the intelligence of plants, if we consider that intelligence is an emergent property of evolutionary biology, just as our human intelligence is an emergent property of the chemistry of our neurons.

Human intelligence is simply an emergent property, resulting from a cascade of cerebral, cognitive, genetic and contextual factors, that enables a mental representation of the things of reality, an abstraction. Intelligence comes from the Latin intellegentia (the faculty of understanding), a compound derived from the Latin intellegere, meaning to understand, whose prefix is inter (between) and whose root is legere (to choose, to pick) or ligare (to bind). Intelligence suggests the ability to connect elements which, without it, would remain separate. Intelligence, in the richest and deepest sense, is the art of relating.

Capacities and talents can be multiple: one who excels in handling the subtleties of language may be less good at abstract reasoning, while another, brilliant in mathematics, is incapable of managing his daily life. There are many dimensions to intelligence. We must beware of simplistic cuts. We still need to cultivate these intelligences. There are myriad modes of expression of these intelligences: emotional, practical, spiritual… That is what makes the definition of intelligence so tricky.

These multiple dimensions influence each other. It is perfectly impossible to separate, in a human being, the reasoning faculties from the affective faculties. All reasoning is always linked to astonishment, joy, frustration or spite. In short, an emotion.

The ideas in Antonio Damasio's book, Descartes' Error, bring an original vision of how the evolutionary process has built neocortical systems of regulation on top of older ones, and of how emotions are manifested in the close interrelationships between body and brain in the perception of objects. In this view, decision making, reasoning and acute affective responses serve the same purpose, survival, as the ancient limbic and endocrine systems.

As we go about our lives, frontal mechanisms create associations between images in the primary sensory cortex and physiological states of the body. The body, then, is an essential part of the system of thought and social judgment. When we have to decide among competing courses of action, images of potential outcomes evoke those physiological states. Damasio specifies how the body provides fundamental content to mental representations. The body constitutes the frame of reference for our representation of the world, and of our relationship to it. In turn, physiological states are themselves subliminally perceived, with the result that alternative actions with negative somatic markers are rapidly rejected, while those with positive markers receive extra attention. So the fact of existing would precede that of thinking, contrary to what Cartesian thought indicates.

Learn more about AI on Continuous.lu!

Translated from: https://medium.com/swlh/gpt-3-and-in-the-beginning-was-the-word-part-1-2-38e67633c315
