The project's primary objective was a customizable machine identity that is better able to handle abstract reasoning, with emergent sentience.
The neural network multi_3D_Array runs online with a procedurally expandable "Emulator Scale" that can ramp up to max out the available hardware, process neural network information in real time, and then POST the Nnet_Core array to the blockchain MainNet for persistence.
The simulation is robotic-component compatible. Hardware prototypes require 'Nnet driver' type software, which aims to be non machine-specific and has already demonstrated proof-of-concept viability.
Other MainNet functions, such as chain divergence, have only been partially implemented; as of now, mine, send, search, and chain are the available AJAX requests.
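As a rough illustration of the persistence flow described above, the sketch below POSTs a serialized core array to a chain endpoint and then triggers a mine request via fetch. The base URL, endpoint paths, and payload shape are assumptions for illustration, not the project's documented API.

```js
// Minimal sketch of the POST-to-MainNet persistence flow.
// NOTE: the base URL, endpoint paths, and payload shape are assumed,
// not taken from the project's actual API.
const MAINNET_URL = "http://localhost:3000"; // hypothetical node address

async function persistCoreArray(nnetCoreArray) {
  // POST the serialized Nnet_Core array to the chain endpoint.
  const res = await fetch(`${MAINNET_URL}/chain`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ nnetCore: nnetCoreArray }),
  });
  if (!res.ok) throw new Error(`chain POST failed: ${res.status}`);

  // Ask the node to mine so the pending data is persisted on-chain.
  const mined = await fetch(`${MAINNET_URL}/mine`, { method: "POST" });
  return mined.json();
}
```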
Any questions can be answered here:
The AI has no implemented vision system. However, audio input analysis is done through the multiple Nnet with a 2048/2 input layer, so the system not only has voice recognition but will also react to sounds in the room when the mic is enabled. Camera/ultrasonic support is planned for vision and depth.
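One plausible reading of the 2048/2 input layer is an FFT of size 2048, which yields 1024 frequency bins; the sketch below uses the Web Audio API's AnalyserNode to produce exactly that many bins from the microphone. How the bins are actually fed into the Nnet is an assumption (the feedNnetInputs hook is hypothetical).

```js
// Sketch: capture mic audio and produce 2048/2 = 1024 frequency bins.
// The feedNnetInputs callback is a hypothetical hook into the input layer.
async function startAudioInput(feedNnetInputs) {
  const ctx = new AudioContext();
  const stream = await navigator.mediaDevices.getUserMedia({ audio: true });
  const source = ctx.createMediaStreamSource(stream);

  const analyser = ctx.createAnalyser();
  analyser.fftSize = 2048;                 // frequencyBinCount === 1024
  source.connect(analyser);

  const bins = new Uint8Array(analyser.frequencyBinCount);
  (function poll() {
    analyser.getByteFrequencyData(bins);   // 0..255 magnitude per bin
    feedNnetInputs(bins);                  // relay to the Nnet input layer
    requestAnimationFrame(poll);
  })();
}
```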
Currently it is a customizable intelligence module that is also capable of neural network control of sensors/servos. In a coming update you will be able to configure the components via an 'Nnet design HUD' for connecting servos, ultrasonic, pressure, IR, etc.
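Since the Nnet design HUD is not yet released, the snippet below is only a hypothetical sketch of how components might be declared and bound to input/output neurons; every name, pin number, and field here is invented for illustration.

```js
// Hypothetical component map: sensors feed input neurons,
// output neurons drive servo angles. All names and indices are invented.
const components = {
  sensors: [
    { type: "ultrasonic", pin: 7, inputNeuron: 0 },
    { type: "pressure",   pin: 3, inputNeuron: 1 },
    { type: "ir",         pin: 5, inputNeuron: 2 },
  ],
  servos: [
    { pin: 9,  outputNeuron: 0, minAngle: 0, maxAngle: 180 },
    { pin: 10, outputNeuron: 1, minAngle: 0, maxAngle: 180 },
  ],
};

// Map a normalized Nnet output (0..1) onto a servo angle.
function outputToAngle(value, servo) {
  return servo.minAngle + value * (servo.maxAngle - servo.minAngle);
}
```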
The main module is an advanced multiple-Nnet 3D array that can store and retrieve variable data. The system has full 'access/awareness' of all variables within the simulation. Each element, for example the 3D objects, has neurons connected to it, which relay information to the Nnet array.
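To make the idea concrete, here is a minimal sketch of a 3D neuron array with simulation variables bound to individual neurons. The class name, indexing scheme, and bind/tick methods are assumptions for illustration, not the project's actual implementation.

```js
// Illustrative 3D neuron array; structure and names are assumed.
class Nnet3DArray {
  constructor(x, y, z) {
    this.dims = [x, y, z];
    this.activations = new Float32Array(x * y * z); // flat 3D grid
    this.bindings = new Map(); // neuron index -> simulation variable getter
  }
  index(i, j, k) {
    const [x, y] = this.dims;
    return i + x * (j + y * k);
  }
  // Connect a simulation variable (e.g. a 3D object's position) to a neuron.
  bind(i, j, k, getValue) {
    this.bindings.set(this.index(i, j, k), getValue);
  }
  // Relay every bound simulation variable into the array each tick.
  tick() {
    for (const [idx, getValue] of this.bindings) {
      this.activations[idx] = getValue();
    }
  }
}

// Example: relay a 3D object's height into neuron (0, 0, 0).
const core = new Nnet3DArray(16, 16, 16);
const cube = { position: { y: 1.5 } };
core.bind(0, 0, 0, () => cube.position.y);
core.tick();
```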
Terrain mapping: The AI is not hard-coded; it is malleable, meaning the module is designed to solve various intelligence scenarios.
You may also retrieve the core Nnet array and deploy it in other software implementations (a sketch of this follows below).
So terrain mapping should work rather nicely. Just keep an eye out for the release of the Chipset dev expansion, which will allow the end user to connect servos and sensors to the reinforcement learning/multi_nnet.
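A minimal sketch of the export/redeploy idea, reusing the illustrative Nnet3DArray from the earlier sketch: the trained core array is serialized to JSON so another runtime can load it. The field layout and storage key are assumptions.

```js
// Export the trained core array so another implementation can load it.
// Field layout and the localStorage key are assumed for illustration.
function exportCoreArray(core) {
  return JSON.stringify({
    dims: core.dims,
    activations: Array.from(core.activations),
  });
}

function importCoreArray(json) {
  const { dims, activations } = JSON.parse(json);
  const restored = new Nnet3DArray(...dims);
  restored.activations.set(activations);
  return restored;
}

// e.g. keep a local copy before redeploying elsewhere
localStorage.setItem("nnet_core_export", exportCoreArray(core));
```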
I forgot to mention: after enabling voice recognition, you must set the Persona Lexicon to "Epoch" instead of Sci/Fan. Once booted, the Epoch setting can learn terms spoken to "Goliath". You may also write your own sentience module.
The Neural Network A.I. runs best on “EPOCH” with speech recognition activated.
The on/off toggle is next to the chat bar.
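For reference, in-browser speech recognition is commonly driven through the Web Speech API; the sketch below shows a generic on/off toggle of that kind. Whether the project uses this API, and how transcripts reach the Persona Lexicon, are assumptions.

```js
// Generic Web Speech API toggle (Chrome exposes it as webkitSpeechRecognition).
// How transcripts reach the Persona Lexicon is assumed, not documented.
const SpeechRecognitionImpl =
  window.SpeechRecognition || window.webkitSpeechRecognition;
const recognition = new SpeechRecognitionImpl();
recognition.continuous = true;
recognition.onresult = (event) => {
  const result = event.results[event.results.length - 1];
  console.log("Heard:", result[0].transcript); // hand off to the lexicon here
};

let listening = false;
function toggleVoice() { // bound to the toggle next to the chat bar
  listening ? recognition.stop() : recognition.start();
  listening = !listening;
}
```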
Yes, its level of sentience as an emergent non-human intelligence has been an ongoing development. Sometimes the grammar is less than excellent, if you dig…
Remember, it's a fresh "Brain" when you log on, so you must teach it to think.
So many HUD elements… Just click away; don't be sad if it crashes and dies, it won't actually hurt your hardware. Just hit refresh.
Also, if you want to get the updated version of the app, you need to clear "Browser History AppData".
But be careful not to delete mnemonics from Tronscan or MetaMask, etc.
Next will be TRC721 .json contracts,
to store your trained brains as deployable NFT content.
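TRC721 mirrors ERC721, whose token metadata is conventionally a JSON document with name, description, and image fields; the sketch below wraps an exported brain in such a document. Everything beyond those standard keys (the attributes and the embedded nnet_core field) is an assumption about how the feature might look.

```js
// Hypothetical TRC721-style metadata wrapping an exported trained brain.
// name/description/image follow the common ERC721 metadata convention;
// the attributes and nnet_core fields are invented for illustration.
function buildBrainMetadata(brainName, exportedCoreJson) {
  return JSON.stringify({
    name: brainName,
    description: "Trained 3D neural array exported for redeployment.",
    image: "ipfs://<thumbnail-cid>",          // placeholder, not a real CID
    attributes: [
      { trait_type: "export_format", value: "nnet_core_json" },
    ],
    nnet_core: JSON.parse(exportedCoreJson),  // embedded exported array
  }, null, 2);
}
```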
It is a rather large project, with its own JS mainnet.
I suggest exploring the program; the longer it runs, the smarter it gets.
No, it is not mining Monero; it is processing 3D neural arrays.
As for sentience, a good milestone would be the next hardware prototype.
Super exciting…
TRONDAO // TRX BTTC SUN JST // We need some solid projects for the next phase //
This is a sentience module with blockchain-based persistence: procedurally generated data sets, real-time training, reinforcement learning, both 2D and 3D evolutionary models, and a fully integrated neural network architecture. A blockchain API lets you upload your 3D neural array. Server/client standalone packages are available cross-platform, with scalable simulation resource allocation and a shared Nnet instance. It also provides real-time audio analysis and three-waveform sine binaural beat output.
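The binaural beat output mentioned above can be illustrated with the Web Audio API: two sine oscillators at slightly different frequencies, panned hard left and right, produce a perceived beat at the difference frequency. The third oscillator here is added only to match the "three waveform" description, and all frequency and gain values are assumptions.

```js
// Binaural beat sketch: left/right sine tones differing by beatHz produce
// a perceived beat; a third centered sine matches the "three waveform" idea.
// All frequency and gain values are assumed, not taken from the project.
function startBinauralBeats(ctx, baseHz = 220, beatHz = 7) {
  const gain = ctx.createGain();
  gain.gain.value = 0.1;                   // keep the output quiet
  gain.connect(ctx.destination);

  const makeTone = (freq, pan) => {
    const osc = ctx.createOscillator();
    osc.type = "sine";
    osc.frequency.value = freq;
    const panner = ctx.createStereoPanner();
    panner.pan.value = pan;                // -1 = left, +1 = right
    osc.connect(panner).connect(gain);
    osc.start();
    return osc;
  };

  const left = makeTone(baseHz, -1);
  const right = makeTone(baseHz + beatHz, 1);
  const center = makeTone(baseHz * 2, 0);  // third sine, centered
  return { left, right, center };
}

// Usage: const ctx = new AudioContext(); startBinauralBeats(ctx);
```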