FAIRNESS AND BIAS IN MACHINE LEARNING MODELS

Cited by: 0
Author
Langworthy, Andrew [1 ]
Affiliation
[1] University of East Anglia, United Kingdom
Source
Journal of the Institute of Telecommunications Professionals | 2023 / Vol. 17
Keywords
Risk assessment
DOI
None available
Abstract
In recent decades the volume of data generated by businesses and consumers has rocketed, covering location, buying habits, browsing activity and more. With this data boom comes the opportunity to exploit that data for commercial gain. Machine learning is the means of doing so: an actively developing field, with improvements in speed and scalability happening at pace. With these advances come the risks of biases in the data or in the models used to exploit it. As with all advancements, the understanding of these risks is still developing, and care must be taken to both measure and mitigate them. © 2023 Institute of Telecommunications Professionals. All rights reserved.
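The abstract's call to measure bias can be made concrete with one widely used fairness metric. As a minimal sketch (the choice of metric and the toy data are illustrative assumptions, not taken from the paper), the demographic parity difference compares a model's positive-prediction rates across groups:

```python
# Hypothetical illustration, not the paper's method: demographic parity
# difference, i.e. the gap in positive-prediction rates between groups.
# The group labels and predictions below are made-up toy data.

def demographic_parity_difference(predictions, groups):
    """Largest absolute gap in positive-prediction rate across groups."""
    rates = {}
    for g in set(groups):
        preds_g = [p for p, grp in zip(predictions, groups) if grp == g]
        rates[g] = sum(preds_g) / len(preds_g)
    values = sorted(rates.values())
    return values[-1] - values[0]

# Toy example: binary predictions for members of groups "a" and "b".
preds = [1, 0, 1, 1, 0, 0, 1, 0]
grps = ["a", "a", "a", "a", "b", "b", "b", "b"]
print(demographic_parity_difference(preds, grps))  # 0.75 - 0.25 = 0.5
```

A value near zero suggests the model flags both groups at similar rates; a large gap is one signal that the data or model warrants the kind of scrutiny the article describes.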
Pages: 29-33