Search results for: drl
Number of results: 1144
In the Drosophila embryo, a subset of muscles require expression and function of the RYK subfamily RTK gene derailed (drl) for correct attachment. We have isolated a second RYK homolog, doughnut (dnt), from Drosophila. The DNT protein exhibits 60% amino acid identity to DRL, and is structurally as similar to the mammalian RYK proteins as is DRL, indicating an ancient duplication event. dnt is e...
In 2015, Google’s DeepMind announced an advancement in creating an autonomous agent based on deep reinforcement learning (DRL) that could beat a professional player in a series of 49 Atari games. However, the current manifestation of DRL is still immature and has significant drawbacks. One of DRL’s imperfections is its lack of “exploration” during the training process, especially when working ...
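The exploration problem mentioned in this abstract is commonly addressed with epsilon-greedy action selection. As a minimal sketch (the function name and signature are illustrative, not from the paper):

```python
import random

def epsilon_greedy(q_values, epsilon):
    """Choose an action index: with probability epsilon pick a random
    action (explore); otherwise pick the highest-valued one (exploit)."""
    if random.random() < epsilon:
        return random.randrange(len(q_values))
    return max(range(len(q_values)), key=lambda a: q_values[a])

# With epsilon = 0 the choice is purely greedy:
# epsilon_greedy([0.1, 0.9, 0.3], 0.0) -> 1
```

The abstract's point is that fixed schedules like this often explore too little in hard environments, which motivates richer exploration strategies.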
Today’s large-scale services generally exploit loosely coupled architectures that restrict functionality requiring tight cooperation (e.g., leader election, synchronization, and reconfiguration) to a small subset of nodes. In contrast, this work presents a way to scalably deploy tightly coupled distributed systems that require significant coordination among a large number of nodes in the wide are...
Neural function is dependent upon the proper formation and development of synapses. We show here that Wnt5 regulates the growth of the Drosophila neuromuscular junction (NMJ) by signaling through the Derailed receptor. Mutations in both wnt5 and drl result in a significant reduction in the number of synaptic boutons. Cell-type specific rescue experiments show that wnt5 functions in the presynap...
In this paper, we introduce a new set of reinforcement learning (RL) tasks in Minecraft (a flexible 3D world). We then use these tasks to systematically compare and contrast existing deep reinforcement learning (DRL) architectures with our new memory-based DRL architectures. These tasks are designed to emphasize, in a controllable manner, issues that pose challenges for RL methods including par...
Deep Reinforcement Learning (DRL) has had several breakthroughs, from helicopter control and Atari games to the AlphaGo success. Despite these successes, DRL still lacks several important features of human intelligence, such as transfer learning, planning and interpretability. We compare two DRL approaches at learning and generalization: Deep Q-Networks and Deep Symbolic Reinforcement Learni...
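The Deep Q-Network approach compared in this abstract is built on the Q-learning update rule. As a hedged sketch, here is the tabular form of that update; a DQN replaces the table `q` with a neural network (the function and variable names are illustrative):

```python
def q_update(q, state, action, reward, next_state, alpha=0.5, gamma=0.9):
    """One tabular Q-learning step:
    Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a)).
    `q` maps each state to a list of action values."""
    best_next = max(q[next_state])
    td_target = reward + gamma * best_next
    q[state][action] += alpha * (td_target - q[state][action])
    return q

# Example: two states, two actions, all values zero except q[1].
q = {0: [0.0, 0.0], 1: [1.0, 0.0]}
q_update(q, state=0, action=0, reward=1.0, next_state=1)
# q[0][0] is now 0.5 * (1.0 + 0.9 * 1.0) = 0.95
```

The symbolic approach the abstract contrasts this with replaces the learned value function with reasoning over explicit symbolic state representations.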
INTRODUCTION Radiation dose to patients undergoing invasive coronary angiography (ICA) is relatively high. Guidelines suggest that a local benchmark or diagnostic reference level (DRL) be established for these procedures. This study sought to create a DRL for ICA procedures in Queensland public hospitals. METHODS Data were collected for all Cardiac Catheter Laboratories in Queensland public h...
Isoprenoids are a large family of compounds with essential functions in all domains of life. Most eubacteria synthesize their isoprenoids using the methylerythritol 4-phosphate (MEP) pathway, whereas a minority uses the unrelated mevalonate pathway and only a few have both. Interestingly, Brucella abortus and some other bacteria that only use the MEP pathway lack deoxyxylulose 5-phosphate (DXP)...
This paper investigates the use of deep reinforcement learning (DRL) in the design of a “universal” MAC protocol referred to as Deep-reinforcement Learning Multiple Access (DLMA). The design framework is partially inspired by the vision of DARPA SC2, a 3-year competition whereby competitors are to come up with a clean-slate design that “best share spectrum with any network(s), in any environmen...
During development, dendrites migrate to their correct locations in response to environmental cues. The mechanisms of dendritic guidance are poorly understood. Recent work has shown that the Drosophila olfactory map is initially formed by the spatial segregation of the projection neuron (PN) dendrites in the developing antennal lobe (AL). We report here that between 16 and 30 h after puparium f...