Exploring Diversity in Neural Architectures for Safety (English)
Free access
- Title: Exploring Diversity in Neural Architectures for Safety
- Contributors: Filipiuk, Michał (author) / Singh, Vasu (author)
- Conference: AISafety 2022, Wien
- Publisher: [RWTH Aachen]
- Place of publication: [Aachen, Germany]
- Publication date: 2022
- Type of media: Conference paper
- Type of material: Electronic Resource
- Language: English
The table of contents below is generated automatically from the data records of the individual contributions available in the index of the TIB portal; it may therefore be incomplete.
- Revisiting the evaluation of deep neural networks for pedestrian detection | Feifel, Patrick / Franke, Benedikt / Raulf, Arne / Schwenker, Friedhelm / Bonarens, Frank / Köster, Frank | 2022
- Utilizing Class Separation Distance for the Evaluation of Corruption Robustness of Machine Learning Classifiers | Siedel, Georg / Vock, Silvia / Morozov, Andrey / Voß, Stefan | 2022
- Feasibility of Inconspicuous GAN-generated Adversarial Patches against Object Detection | Pavlitskaya, Svetlana / Codău, Bianca-Marina / Zöllner, J. Marius | 2022
- Privacy Safe Representation Learning via Frequency Filtering Encoder | Jeong, Jonghu / Cho, Minyong / Benz, Philipp / Hwang, Jinwoo / Kim, Jeewook / Lee, Seungkwan / Kim, Tae-hoon | 2022
- Constrained Policy Optimization for Controlled Contextual Bandit Exploration | Kachuee, Mohammad / Lee, Sungjin | 2022
- Accountability and Responsibility of Artificial Intelligence Decision-making Models in Indian Policy Landscape | Malhotra, Palak / Misra, Amita | 2022
- Leveraging generative models to characterize the failure conditions of image classifiers | Le Coz, Adrien / Herbin, Stéphane / Adjed, Faouzi | 2022
- A Hierarchical HAZOP-Like Safety Analysis for Learning-Enabled Systems | Qi, Yi / Conmy, Philippa Ryan / Huang, Wei / Zhao, Xingyu / Huang, Xiaowei | 2022
- CAISAR: A platform for Characterizing Artificial Intelligence Safety and Robustness | Girard-Satabin, Julien / Alberti, Michele / Bobot, François / Chihani, Zakaria / Lemesle, Augustin | 2022
- Improvement of Rejection for AI Safety through Loss-Based Monitoring | Scholz, Daniel / Hauer, Florian / Knobloch, Klaus / Mayr, Christian | 2022
- Understanding Adversarial Examples Through Deep Neural Network's Classification Boundary and Uncertainty Regions | Shu, Juan / Xi, Bowei / Kamhoua, Charles | 2022
- The impact of averaging logits over probabilities on ensembles of neural networks | Tassi, Cedrique Rovile Njieutcheu / Gawlikowski, Jakob / Fitri, Auliya Unnisa / Triebel, Rudolph | 2022
- Exploring Diversity in Neural Architectures for Safety | Filipiuk, Michał / Singh, Vasu | 2022
- Assessing Demographic Bias Transfer from Dataset to Model: A Case Study in Facial Expression Recognition | Dominguez-Catena, Iris / Paternain, Daniel / Galar, Mikel | 2022
- Benchmarking and deeper analysis of adversarial patch attack on object detectors | Labarbarie, Pol / Chan Hon Tong, Adrien / Herbin, Stéphane / Leyli-Abadi, Milad | 2022
- Increasingly Autonomous CPS: Taming Emerging Behaviors from an Architectural Perspective | Hugues, Jerome / Cancila, Daniela | 2022
- Safety-aware Active Learning with Perceptual Ambiguity and Criticality Assessment | Rajendran, Prajit T. / Ollier, Guillaume / Espinoza, Huascar / Adedjouma, Morayo / Delaborde, Agnes / Mraidha, Chokri | 2022
- A causal perspective on AI deception in games | Ward, Francis Rhys / Belardinelli, Francesco / Toni, Francesca | 2022
- Let it RAIN for Social Good | Brännström, Mattias / Theodorou, Andreas / Dignum, Virginia | 2022