Search results for: drl
Number of results: 1144
BACKGROUND Animal studies have shown that exposure to common, low-level environmental contaminants [e.g., polychlorinated biphenyls (PCBs), lead] causes excessive and inappropriate responding on intermittent reinforcement schedules. The Differential Reinforcement of Low Rates task (DRL) has been shown to be especially sensitive to low-level PCB exposure in monkeys. OBJECTIVES We investigated ...
High-dose methamphetamine (METH) causes damage to the dopamine and serotonin neurons in the brains of laboratory animals. The purpose of this report was to determine the long-term consequences of high-dose METH treatment on behavior and neurochemistry. Rats were trained on the differential reinforcement of low-rate 72-s (DRL 72-s) schedule of reinforcement. Twelve weeks after training began (ag...
Learning locomotion skills is a challenging problem. To generate realistic and smooth locomotion, existing methods use motion capture, finite state machines or morphology-specific knowledge to guide the motion generation algorithms. Deep reinforcement learning (DRL) is a promising approach for the automatic creation of locomotion control. Indeed, a standard benchmark for DRL is to automatically...
Although the use of mammography has risen sharply in Iran over the past few years, very little, if anything, has been reported on the extent of patient dose from this type of imaging. The purpose of this study was to establish a local diagnostic reference level (DRL) for mammography in the greater Khorasan province of Iran. It is generally assumed that the glandular tissue is the...
Much of the success of single agent deep reinforcement learning (DRL) in recent years can be attributed to the use of experience replay memories (ERM), which allow Deep Q-Networks (DQNs) to be trained efficiently through sampling stored state transitions. However, care is required when using ERMs for multi-agent deep reinforcement learning (MA-DRL), as stored transitions can become outdated bec...
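The snippet above describes experience replay memories (ERMs) that let DQNs train by sampling stored state transitions. A minimal sketch of such a buffer is shown below; the class name, capacity, and batch size are illustrative assumptions, not taken from the cited work.

```python
import random
from collections import deque


class ReplayMemory:
    """Minimal experience replay memory (ERM) sketch: stores state
    transitions and samples uniform random minibatches for DQN-style
    training. Capacity is an illustrative choice, not from the source."""

    def __init__(self, capacity=10000):
        # A bounded deque evicts the oldest transitions first, which is
        # also why stored transitions can become outdated in the
        # multi-agent setting the snippet mentions.
        self.buffer = deque(maxlen=capacity)

    def push(self, state, action, reward, next_state, done):
        self.buffer.append((state, action, reward, next_state, done))

    def sample(self, batch_size):
        # Uniform sampling breaks the temporal correlation between
        # consecutive environment steps.
        return random.sample(self.buffer, batch_size)

    def __len__(self):
        return len(self.buffer)
```

In multi-agent DRL, entries like these can describe other agents' outdated policies, which is the staleness problem the abstract refers to.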
This paper deals with the reality gap from a novel perspective, targeting transferring Deep Reinforcement Learning (DRL) policies learned in simulated environments to the real-world domain for visual control tasks. Instead of adopting the common solutions to the problem by increasing the visual fidelity of synthetic images output from simulators during the training phase, this paper seeks to ta...
Daytime Running Lights (DRL) on motorcycles have been shown to counteract the inherently lower sensory conspicuity of these vehicles and to significantly improve their safety. This advantage of exclusive DRL use by motorcycles is now being eroded by the increasing use of DRLs on cars. The present experiment aimed at evaluating the effects of car DRLs on motorcycle perception in a...
Deep reinforcement learning (DRL) has shown incredible performance in learning various tasks to the human level. However, unlike human perception, current DRL models connect the entire low-level sensory input to the state-action values rather than exploiting the relationships between and among the entities that constitute the sensory input. Because of this difference, DRL needs a vast amount of experi...
The focus of this work is to enumerate the various approaches and algorithms that center around the application of reinforcement learning to robotic manipulation tasks. Earlier methods utilized specialized policy representations and human demonstrations to constrain the policy. Such methods worked well with the continuous state and policy spaces of robots but failed to produce generalized policies....
Numerous studies have shown that ingrowing olfactory axons exert powerful inductive influences on olfactory map development. From an overexpression screen, we have identified wnt5 as a potent organizer of the olfactory map in Drosophila melanogaster. Loss of wnt5 resulted in severe derangement of the glomerular pattern, whereas overexpression of wnt5 resulted in the formation of ectopic midline...