
dc.contributor.author: Oh, Seongmun
dc.contributor.author: Jung, Jaesung
dc.contributor.author: Onen, Ahmet
dc.contributor.author: Lee, Chul-Ho
dc.date.accessioned: 2023-02-22T12:44:38Z
dc.date.available: 2023-02-22T12:44:38Z
dc.date.issued: 2022 [en_US]
dc.identifier.issn: 2296-598X
dc.identifier.other: WOS:000862291500001
dc.identifier.uri: https://doi.org/10.3389/fenrg.2022.957466
dc.identifier.uri: https://hdl.handle.net/20.500.12573/1452
dc.description.abstract: The demand response (DR) program is a promising way to increase the ability to balance both supply and demand, optimizing the economic efficiency of the overall system. This study focuses on the DR participation strategy in terms of aggregators who offer appropriate DR programs to customers with flexible loads. DR aggregators engage in the electricity market according to customer behavior and must make decisions that increase the profits of both DR aggregators and customers. Customers use the DR program model, which sends its demand reduction capabilities to a DR aggregator that bids aggregate demand reduction to the electricity market. DR aggregators not only determine the optimal rate of incentives to present to the customers but can also serve customers and formulate an optimal energy storage system (ESS) operation to reduce their demands. This study formalized the problem as a Markov decision process (MDP) and used the reinforcement learning (RL) framework. In the RL framework, the DR aggregator and each customer are allocated to each agent, and the agents interact with the environment and are trained to make an optimal decision. The proposed method was validated using actual industrial and commercial customer demand profiles and market price profiles in South Korea. Simulation results demonstrated that the proposed method could optimize decisions from the perspective of the DR aggregator. [en_US]
dc.description.sponsorship: National IT Industry Promotion Agency (NIPA), Republic of Korea 1711151479 [en_US]
dc.language.iso: eng [en_US]
dc.publisher: FRONTIERS MEDIA SA [en_US]
dc.relation.isversionof: 10.3389/fenrg.2022.957466 [en_US]
dc.rights: info:eu-repo/semantics/openAccess [en_US]
dc.subject: reinforcement learning [en_US]
dc.subject: energy storage system [en_US]
dc.subject: demand response [en_US]
dc.subject: aggregator [en_US]
dc.subject: electricity market [en_US]
dc.title: A reinforcement learning-based demand response strategy designed from the Aggregator's perspective [en_US]
dc.type: article [en_US]
dc.contributor.department: AGÜ, Faculty of Engineering, Department of Electrical and Electronics Engineering [en_US]
dc.contributor.authorID: 0000-0001-7086-5112 [en_US]
dc.contributor.institutionauthor: Önen, Ahmet
dc.identifier.volume: 10 [en_US]
dc.identifier.startpage: 1 [en_US]
dc.identifier.endpage: 13 [en_US]
dc.relation.journal: FRONTIERS IN ENERGY RESEARCH [en_US]
dc.relation.publicationcategory: Article - International Peer-Reviewed Journal - Institutional Faculty Member [en_US]
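
The abstract above frames the problem as a Markov decision process solved with reinforcement learning, with one agent for the DR aggregator and one agent per customer. The Python sketch below is only a minimal illustration of that multi-agent idea, not the authors' implementation: the incentive and curtailment levels, market prices, discomfort cost, and the bandit-style Q-updates are all hypothetical simplifications chosen to keep the example self-contained.

import random
from collections import defaultdict

# Toy multi-agent setting loosely following the abstract: the DR aggregator agent
# picks an incentive rate, each customer agent decides how much load to curtail,
# and the aggregator sells the aggregated reduction into the market.
# All constants below are illustrative assumptions, not values from the paper.

INCENTIVE_LEVELS = [0.0, 0.5, 1.0]    # incentive rate offered by the aggregator (hypothetical)
CURTAIL_LEVELS = [0.0, 5.0, 10.0]     # kWh each customer can shed (hypothetical)
MARKET_PRICES = [0.8, 1.2, 1.6]       # possible market prices (hypothetical)
DISCOMFORT = 0.3                      # customer cost per curtailed kWh (hypothetical)

def epsilon_greedy(q, state, actions, eps=0.1):
    """Pick a random action with probability eps, otherwise the greedy one."""
    if random.random() < eps:
        return random.choice(actions)
    return max(actions, key=lambda a: q[(state, a)])

def train(n_customers=3, episodes=5000, alpha=0.1, eps=0.1):
    agg_q = defaultdict(float)                                 # aggregator Q-table
    cust_q = [defaultdict(float) for _ in range(n_customers)]  # one Q-table per customer

    for _ in range(episodes):
        price = random.choice(MARKET_PRICES)                   # single-step "state": market price
        incentive = epsilon_greedy(agg_q, price, INCENTIVE_LEVELS, eps)

        total_reduction = 0.0
        for i in range(n_customers):
            # Customer state: the incentive it is offered.
            cut = epsilon_greedy(cust_q[i], incentive, CURTAIL_LEVELS, eps)
            reward_c = incentive * cut - DISCOMFORT * cut      # payment minus discomfort
            key = (incentive, cut)
            cust_q[i][key] += alpha * (reward_c - cust_q[i][key])
            total_reduction += cut

        # Aggregator earns the market price on the aggregated reduction
        # and pays the incentive out to customers.
        reward_a = (price - incentive) * total_reduction
        key = (price, incentive)
        agg_q[key] += alpha * (reward_a - agg_q[key])

    return agg_q, cust_q

if __name__ == "__main__":
    agg_q, _ = train()
    for price in MARKET_PRICES:
        best = max(INCENTIVE_LEVELS, key=lambda a: agg_q[(price, a)])
        print(f"market price {price:.1f} -> learned incentive {best:.1f}")

Running the sketch prints the incentive rate each market price maps to after training, which mirrors, in a highly simplified form, the aggregator-side decision the abstract describes; the paper itself additionally models ESS operation and uses real Korean demand and price profiles.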


Files in this item:


This item appears in the following collection(s).
