Deep reinforcement learning for wireless networks, F. Richard Yu, Ying He, (electronic resource)

Label
Deep reinforcement learning for wireless networks
Title
Deep reinforcement learning for wireless networks
Statement of responsibility
F. Richard Yu, Ying He
Creator
Yu, F. Richard
Contributor
He, Ying
Subject
  • Reinforcement learning
  • Wireless communication systems
Language
eng
Dewey number
006.3/1
Index
no index present
Literary form
non fiction
Nature of contents
dictionaries
Series statement
SpringerBriefs in Electrical and Computer Engineering
Label
Deep reinforcement learning for wireless networks, F. Richard Yu, Ying He, (electronic resource)
Instantiates
Publication
Note
Description based upon print version of record
Antecedent source
file reproduced from an electronic resource
Contents
  • Intro; Preface; A Brief Journey Through "Deep Reinforcement Learning for Wireless Networks"; Contents; 1 Introduction to Machine Learning; 1.1 Supervised Learning; 1.1.1 k-Nearest Neighbor (k-NN); 1.1.2 Decision Tree (DT); 1.1.3 Random Forest; 1.1.4 Neural Network (NN); Random NN; Deep NN; Convolutional NN; Recurrent NN; 1.1.5 Support Vector Machine (SVM); 1.1.6 Bayes' Theory; 1.1.7 Hidden Markov Models (HMM); 1.2 Unsupervised Learning; 1.2.1 k-Means; 1.2.2 Self-Organizing Map (SOM); 1.3 Semi-supervised Learning; References; 2 Reinforcement Learning and Deep Reinforcement Learning
  • 2.1 Reinforcement Learning; 2.2 Deep Q-Learning; 2.3 Beyond Deep Q-Learning; 2.3.1 Double DQN; 2.3.2 Dueling DQN; References; 3 Deep Reinforcement Learning for Interference Alignment Wireless Networks; 3.1 Introduction; 3.2 System Model; 3.2.1 Interference Alignment; 3.2.2 Cache-Equipped Transmitters; 3.3 Problem Formulation; 3.3.1 Time-Varying IA-Based Channels; 3.3.2 Formulation of the Network's Optimization Problem; System State; System Action; Reward Function; 3.4 Simulation Results and Discussions; 3.4.1 TensorFlow; 3.4.2 Simulation Settings; 3.4.3 Simulation Results and Discussions
  • 3.5 Conclusions and Future Work; References; 4 Deep Reinforcement Learning for Mobile Social Networks; 4.1 Introduction; 4.1.1 Related Works; 4.1.2 Contributions; 4.2 System Model; 4.2.1 System Description; 4.2.2 Network Model; 4.2.3 Communication Model; 4.2.4 Cache Model; 4.2.5 Computing Model; 4.3 Social Trust Scheme with Uncertain Reasoning; 4.3.1 Trust Evaluation from Direct Observations; 4.3.2 Trust Evaluation from Indirect Observations; Belief Function; Dempster's Rule of Combining Belief Functions; 4.4 Problem Formulation; 4.4.1 System State; 4.4.2 System Action; 4.4.3 Reward Function
  • 4.5 Simulation Results and Discussions; 4.5.1 Simulation Settings; 4.5.2 Simulation Results; 4.6 Conclusions and Future Work; References
Control code
on1083463760
Dimensions
unknown
Extent
1 online resource (78 p.)
File format
one file format
Form of item
online
Isbn
9783030105464
Level of compression
unknown
Note
SpringerLink
Quality assurance targets
unknown
Reformatting quality
unknown
Specific material designation
remote
System control number
(OCoLC)1083463760

Library Locations

    • Internet
      Albany, Auckland, 0632, NZ