4th Alps-Adriatic Inverse Problems Workshop 2025 (AAIP 2025)
Alpen-Adria-Universität Klagenfurt
Program Overview:
The 4th Alps-Adriatic Inverse Problems Workshop will be held at the Department of Mathematics of the Alpen-Adria-Universität Klagenfurt on October 2-3, 2025.
The aim of this workshop is to gather scientists working on the theory and applications of inverse problems in academia and industry, in order to present their research, exchange ideas, and start new collaborations. Scientists at an early stage of their career (PhD students, postdocs) are particularly encouraged to participate.
Participants might also take the opportunity to attend the preceding European Conference on Computational Optimization 2025 (EUCCO 2025) (September 29 - October 1, 2025).
The workshop will be held in person.
The scientific program starts on Thursday at 9:00 and ends on Friday late afternoon. An easy hike is planned for Wednesday afternoon, weather permitting. More information on the hike will be sent out via email in September.
Invited Speakers:
- Tatiana Bubba, University of Ferrara
- Vincent Duval, INRIA
Deadlines:
Registration: August 31, 2025
Abstract submission: August 20, 2025
Acceptance of abstract: August 31, 2025 (the decision on acceptance will be communicated within one month of submission, and no later than August 31, 2025)
Payment of fee: September 8, 2025
Workshop fees (including conference dinner):
- regular: EUR 240,-
- IPIA member (see https://ipia.site/wp/ ): EUR 200,-
- PhD student: EUR 120,-
- Attendance at AAIP 2025 and EUCCO 2025: EUR 380,-
- Attendance at AAIP 2025 and EUCCO 2025 - PhD student: EUR 180,-
Cancellation policy:
25 % of the registration fee will be refunded in case of cancellation before September 19, 2025. There is no refund for cancellation after this date.
Registration information:
- An Indico account is required to proceed with the registration. Please click on this link to create an Indico account.
- In the registration form you are asked to distinguish between an "invoice for a private person" (e.g., your university or company reimburses you for the costs) and an "invoice in case your institution / company pays the fee directly". For companies within the European Union, a VAT number is also required.
- The fee can only be paid via credit card. If you make use of the combined fee (EUR 380,- for regular participants, EUR 180,- for PhD students), please register for both events; after payment of the combined fee, we will set your payment status to paid for both events.
- When you register, you will be asked to agree to the privacy policy. You can find the privacy policy here.
Local Organizing Committee:
- Barbara Kaltenbacher
- Pascal Lehner
- Diana-Elena Mirciu
- Teresa Rauscher
- Elena Resmerita
- Anita Wachter
- Tobias Wolf
Scientific Committee:
- Kristian Bredies, Graz
- Markus Haltmeier, Innsbruck
- Barbara Kaltenbacher, Klagenfurt
- Elena Resmerita, Klagenfurt
- Ronny Ramlau, Linz
- Otmar Scherzer, Wien
Program (overview):
All talks take place in room HS O.0.01 (Stiftungssaal, Servicegebäude, see https://campusplan.aau.at?poi-id=556&floor=0 ), and the breaks take place next to the lecture room.
Registration (Room O.0.01)
Opening (Room O.0.01)
Plenary talk (Room O.0.01)
1. Leveraging representation principles in inverse problems - the case of curve recovery
In the last few years, following the work of Michaël Unser and collaborators, the inverse problems community has taken a keen interest in representing the solutions of variational problems.
Such representation results, which rely on convex analysis, make it possible to derive theoretical properties of minimizers and to design efficient numerical methods.
In this talk, I will give an overview of their use for the recovery of curves (in multiple senses), and I will describe a new method to compute geodesics in images using a variational approach. This is joint work with Majid Arthaud and Antonin Chambolle.
Speaker: Vincent Duval (INRIA Paris)
Session 1 (Room O.0.01)
2. The SCD semismooth* Newton method for the efficient minimization of Tikhonov functionals
We consider the efficient numerical minimization of Tikhonov functionals with nonlinear operators and non-smooth and non-convex penalty terms, which appear e.g. in variational regularization. For this, we consider a new class of SCD semismooth$^*$ Newton methods, which are based on a novel concept of graphical derivatives, and exhibit locally superlinear convergence. We present a detailed description of these methods, and provide explicit algorithms in the case of sparsity ($\ell_p$, $0 \leq p < \infty$) and TV penalty terms. The numerical performance of these methods is then illustrated on a number of tomographic imaging problems.
Speaker: Dr Simon Hubmer (Johannes Kepler University Linz, Austria)
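For orientation, a prototypical Tikhonov functional of the type described above reads (in generic notation, not necessarily that of the talk)
$\mathcal{T}_\alpha(x) = \tfrac{1}{2}\,\|F(x) - y^\delta\|^2 + \alpha\,\mathcal{R}(x),$
where $F$ is the nonlinear forward operator, $y^\delta$ the noisy data, $\alpha > 0$ the regularization parameter, and $\mathcal{R}$ a non-smooth, possibly non-convex penalty such as $\mathcal{R}(x) = \sum_i |x_i|^p$ or $\mathcal{R}(x) = \mathrm{TV}(x)$.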
3. On the Intensity-based Inversion Method for Quantitative Optical Coherence Elastography
Elastography, as an imaging modality in general, aims at mapping the mechanical properties of a given material sample. For estimating the values of stiffness and strain quantitatively, we look at Elastography from the perspective of Inverse Problems. In particular, we start with theoretical ideas on how to perform Elastography and continue all the way to implementing Optical Coherence Elastography (OCE) as an imaging modality in practice. Furthermore, we discuss inversion methods for estimating strain and stiffness from optical coherence imaging data, and validate the reconstruction results against the ground truth for 12 silicone elastomer phantoms. In addition, we present a convergence analysis of the intensity-based inversion method for evaluating material parameters.
Speaker: Dr Ekaterina Sherina (University of Vienna)
10:40 AM Coffee break (Room O.0.01)
Session 2 (Room O.0.01)
4. Weighting operators for sparsity regularization
Standard regularization methods for inverse problems with non-injective forward operators typically introduce bias by favoring components orthogonal to the null space of the forward map. This often poses challenges in source recovery tasks, particularly in inverse source problems. To address this, we generalize previous work that mitigated the issue using the pseudo-inverse $A^\dagger$, by now introducing a broader class of weighting schemes defined via operators of the form $C = BA$, where $B$ is any linear operator.
We provide recovery guarantees for both single and multiple sources under various conditions, including nearly parallel source images. Applications to elliptic PDE source identification using boundary data demonstrate the effectiveness of this approach across different choices of $B$, such as the identity, truncated pseudo-inverse, and random matrices, as well as a problem-dependent yet simply constructed matrix, which might improve recovery in problems involving multiple sources and sinks.
This work suggests that effective solutions to certain inverse problems can be achieved through this alternative weighting approach, highlighting the importance of selecting $B$ in a problem-dependent manner and motivating future research on its optimal choice.
Speaker: Niranjana Sudheer (Norwegian University of Life Sciences, NMBU, Oslo, Norway)
5. A Probabilistic Approach to Inverse Problems
In many application areas (mechanics, geophysics, image processing,...), a large number of real-life problems can be cast as inverse problems. This is the case in the field of energy, where key problems related to engineering, maintenance and management of power systems like, for example, the non-destructive evaluation of some components of nuclear power plants, are inverse problems. Solving such problems is hence of great interest in industry and a lot of work has been devoted to the development of a wide set of approaches and techniques.
We propose a new approach consisting in formulating a linear inverse problem as the maximization of the probability to satisfy a set of two-sided (bilateral) random inequalities. A key property needed for solving such problems concerns the concavity of the probability function. Even restricting to the case (considered here) of coefficient vectors with nondegenerate multivariate Gaussian distributions, only sufficient conditions for characterizing concavity of the joint probability function have been reported so far in the literature. We present new results stating necessary and sufficient conditions for concavity, both for the case of a single-sided (unilateral) and two-sided (bilateral) probabilistic inequalities. These results open the way to efficient resolution of linear inverse problems using the proposed probabilistic approach. As a typical illustration, we discuss a series of computational results on random Gaussian linear systems featuring some typical characteristics of inverse problems like ill-conditioning.
Speaker: Riadh Zorgati (EDF Lab Paris Saclay)
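As a rough illustration of this formulation (generic notation, not necessarily the speaker's exact setup), the linear system $Tx \approx y$ is replaced by the optimization problem
$\max_{x} \ \mathbb{P}\big( a \le \tilde{T} x \le b \big),$
where the rows of $\tilde{T}$ follow nondegenerate multivariate Gaussian distributions and $a \le b$ are two-sided bounds derived from the data; concavity of the probability function $x \mapsto \mathbb{P}(a \le \tilde{T}x \le b)$ is the property that makes this maximization tractable.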
6. Inverse problems on Hermitian operators in quantum tomography
Recent advancements in photon induced near-field electron microscopy (PINEM) enable the preparation, coherent manipulation and characterization of free-electron quantum states. The available measurement consists of electron energy spectrograms, and the goal is to reconstruct a density matrix which represents the quantum state. This requires the solution of an ill-posed inverse problem, where a positive semi-definite trace-class operator is reconstructed given its diagonal in different bases. We regularize by using the quantum relative entropy as a penalty term, which allows us to transfer ideas from maximum entropy regularization to operator spaces. Additionally, we investigate numerical methods for the solution of inverse problems posed on positive semi-definite Hermitian operators.
Speaker: Florian Oberender (Georg-August-Universität Göttingen, Institut für Numerische und Angewandte Mathematik)
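For reference, the quantum relative entropy used as a penalty term is, in standard notation,
$S(\rho\,\|\,\sigma) = \mathrm{Tr}\big(\rho\,(\log\rho - \log\sigma)\big)$
for density operators $\rho$ and $\sigma$; penalizing the distance of the reconstructed state $\rho$ to a reference state $\sigma$ in this sense plays the role that the Boltzmann-Shannon entropy plays in classical maximum entropy regularization.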
12:15 PM Lunch
Session 3 (Room O.0.01)
7. The $L^2$-Optimal Discretization of Tomographic Projection Operators
Tomographic inverse problems remain a cornerstone of medical investigations, allowing the visualization of patients' interior features. While the infinite-dimensional operators modeling the measurement process (e.g., the Radon transform) are well understood, in practice one can only observe finitely many measurements and employ finitely many computations in reconstruction. Thus, proper discretization of these operators is crucial. Different discretization approaches show distinct strengths regarding the approximation quality of the forward- or backward projections. Hence, it is common to employ distinct discretization frameworks for the two said operators, creating a non-adjoint pair of operators. Using such unmatched projection pairs in iterative methods can be problematic, as theoretical convergence guarantees of many iterative methods are based on matched operators. We present a novel theoretical framework for designing an $L^2$-optimal discretization of the forward projection. Curiously, the adjoint of said optimal discretization is the optimal discretization for the backprojection, yielding a matched discretization framework for which both the forward and backward discretization (being the optimal choices) converge, thus eliminating the need for unmatched operator pairs. In the parallel beam case, this optimal discretization is the well-known strip model for discretization, while in the fanbeam case, a novel weighted strip model is optimal.
Speaker: Dr Richard Huber (IDea Lab, University of Graz, Austria)
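The matched/unmatched distinction mentioned above can be checked numerically: for a truly adjoint pair one has $\langle Ax, y\rangle = \langle x, A^{*}y\rangle$ up to rounding. A minimal sketch with generic placeholder matrices standing in for a discretized projector/backprojector pair (not the discretizations from the talk):

    import numpy as np

    rng = np.random.default_rng(0)
    A_fwd = rng.standard_normal((30, 20))                     # stand-in for a discretized forward projector
    B_back = A_fwd.T + 0.05 * rng.standard_normal((20, 30))   # slightly "unmatched" backprojector

    x = rng.standard_normal(20)
    y = rng.standard_normal(30)

    # Adjoint test: for a matched pair, <A x, y> equals <x, A^T y> up to rounding;
    # the size of the gap quantifies how unmatched a projector/backprojector pair is.
    gap_unmatched = abs((A_fwd @ x) @ y - x @ (B_back @ y))
    gap_matched = abs((A_fwd @ x) @ y - x @ (A_fwd.T @ y))
    print(f"unmatched gap: {gap_unmatched:.3e}   matched gap: {gap_matched:.3e}")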
8. Full-field Photoacoustic Tomography with Variable Sound Speed and Attenuation
In the standard photoacoustic tomography (PAT) measurement setup, the data used consist of time-dependent signals measured on an observation surface. In contrast, the measurement data of the recently invented full-field detection technique provide the solution of the wave equation in the spatial domain at a single point in time. While reconstruction using classical PAT data has been extensively studied, not much is known about the full-field PAT problem. In this work, we study full-field photoacoustic tomography with spatially variable sound velocity and spatially variable attenuation. In particular, we reconstruct the initial pressures $p|_{t=0}$ and $p_t|_{t=0}$ from 2D projections of the full 3D acoustic pressure distribution at a given time.
Speaker: Dr Richard Kowar (University of Innsbruck)
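Schematically, and with attenuation omitted for brevity (simplified notation, not taken verbatim from the talk), the acoustic pressure solves
$\partial_t^2 p = c(x)^2\,\Delta p, \qquad p|_{t=0} = p_0, \quad p_t|_{t=0} = p_1,$
with spatially variable sound speed $c(x)$, and the full-field data consist of 2D projections of the 3D field $p(\cdot, T)$ at a single measurement time $T$, from which the initial values $p_0$ and $p_1$ are to be recovered.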
9. Exact Parameter Identification in PET Pharmacokinetic Modeling
Quantitative dynamic positron emission tomography (PET) attempts to reconstruct kinetic tissue parameters based on a time series of images showing the concentration of the PET tracer over time. The underlying problem is a nonlinear parameter identification problem which is based on an estimation of the arterial input function obtained by costly and time-consuming blood sample analysis. Our work analyzes the identifiability of kinetic tissue parameters based on image data alone. We show in an analytic identifiability result that for the commonly used (ir)reversible two-tissue compartment model, and under reasonable assumptions, it is possible to uniquely identify these parameters in the idealized noiseless scenario without the need for additional concentration measurements from blood samples. Numerical experiments with a regularization approach are also performed to support the analytical result in an application example.
References:
[1] Martin Holler, Erion Morina, and Georg Schramm, Exact parameter identification in PET pharmacokinetic modeling using the irreversible two tissue compartment model, Physics in Medicine & Biology, 69 (2024), p. 165008, https://doi.org/10.1088/1361-6560/ad539e.
[2] Martin Holler and Erion Morina, Exact Parameter Identification in PET Pharmacokinetic Modeling: Extension to the Reversible Two Tissue Compartment Model, arXiv preprint arXiv:2504.09959, 2025.
Speaker: Erion Morina (University of Graz)
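For context, the two-tissue compartment model referred to here is commonly written as the linear ODE system
$\dot C_1(t) = K_1\,C_A(t) - (k_2 + k_3)\,C_1(t) + k_4\,C_2(t), \qquad \dot C_2(t) = k_3\,C_1(t) - k_4\,C_2(t),$
where $C_A$ is the arterial input function, $C_1$ and $C_2$ are the compartment concentrations, and $K_1, k_2, k_3, k_4$ are the kinetic tissue parameters to be identified; the irreversible variant corresponds to $k_4 = 0$.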
3:15 PM Coffee break (Room O.0.01)
Session 4 (Room O.0.01)
10. Sound Speed and Layer Adapted Focusing Methods in Medical Ultrasound
Focused ultrasound is a widely used non-invasive diagnostic and therapeutic tool in modern medicine. A crucial assumption in all of its applications is a constant sound speed in the observed medium. Non-constant sound speeds cause the actual times of flight of the ultrasound waves through the medium to differ from the calculated times of flight that are accounted for in focusing algorithms. This leads to an aberrated focus, blurring ultrasound images. As real-time ultrasound imaging is computationally expensive, a fast aberration correction method is needed. In this talk, we present adapted ultrasound focusing algorithms based on geometrical acoustics that take a step in this direction. In a known layered medium setting, it is possible to calculate the correct times of flight. The resulting adapted focusing algorithms correct for the aberrations caused by the different sound speeds in the medium layers. Numerical simulations are conducted to determine the precision of our methods, and finally the improvements obtained by our methods in reconstructing ultrasound images are demonstrated.
Speaker: Simon Hackl (JKU Linz)
11. Layer Adapted Time of Flight Calculation using Interpolation for Medical Ultrasound Imaging
Medical ultrasound images are frequently reconstructed using simplifying assumptions regarding acoustic wave propagation. A prevalent assumption is that sound speed is uniform across the imaging medium. However, different tissue types possess varying sound speeds, which leads to image distortions and defocusing. This talk introduces a precise and computationally efficient method for ultrasonic ray tracing in layered media. We present a geometrical acoustics based algorithm that corrects aberrations in layered media using a modified time-of-flight (ToF) calculation that additionally considers sound speed variations and the resulting refraction effects. The focusing delays, required for the calculation of the final image, are corrected using accurate ToF results obtained from a nonlinear system of equations derived using geometrical acoustics. Our interpolation method extends traditional bilinear interpolation by using annular sector area ratios to establish generalized barycentric coordinates, facilitating effective interpolation over smoothly curved geometries. When compared to ground truth time-of-flight values, our method consistently achieves errors small enough to be negligible when applied in image reconstruction. This result demonstrates that our method takes a step towards improved real-time aberration correction in ultrasound imaging.
Speaker: Yıldız Oruklu (RICAM)
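To illustrate the kind of computation involved, the following toy sketch (a simplified two-layer setting with a single refraction point, chosen for illustration and not the algorithm of the talks) obtains a layer-adapted time of flight from Fermat's principle by minimizing the travel time over the crossing point on the layer interface:

    import numpy as np
    from scipy.optimize import minimize_scalar

    # Toy setup: transducer at (0, 0), focus at (x_f, z_f); the interface between
    # the two layers (sound speeds c1, c2) is the horizontal line z = d.
    c1, c2 = 1540.0, 1450.0     # sound speeds in m/s (illustrative values)
    d = 0.02                    # interface depth in m
    x_f, z_f = 0.01, 0.05       # focal point in m

    def travel_time(x_i):
        """Time of flight through the crossing point (x_i, d) on the interface."""
        leg1 = np.hypot(x_i, d) / c1                 # transducer -> interface
        leg2 = np.hypot(x_f - x_i, z_f - d) / c2     # interface -> focus
        return leg1 + leg2

    # Fermat's principle: the physical ray minimizes the travel time.
    res = minimize_scalar(travel_time, bounds=(-0.05, 0.05), method="bounded")
    print(f"layer-adapted ToF: {res.fun * 1e6:.3f} us (crossing point x = {res.x * 1e3:.2f} mm)")

    # For comparison: the constant-sound-speed assumption used in standard beamforming.
    t_const = np.hypot(x_f, z_f) / c1
    print(f"constant-speed ToF: {t_const * 1e6:.3f} us")

If the two sound speeds coincide, both values agree; their difference is exactly the aberration that layer-adapted focusing delays are meant to correct.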
12. Off-axis PSF reconstruction for first light instruments on the ELT
In the upcoming generation of Extremely Large Telescopes (ELT), with mirror diameters of up to 40 m, the impact of the turbulent atmosphere is corrected by Adaptive Optics (AO) systems, such as Single Conjugate Adaptive Optics (SCAO) within the Multi-AO Imaging Camera for Deep Observations (MICADO) of ESO's ELT. However, the quality of astronomical images is still degraded due to the time delay stemming from the wavefront sensor (WFS) integration time and the adjustment of the deformable mirror(s) (DM). This results in a blur which can be mathematically described by a convolution of the original image with the point spread function (PSF) of the instrument, the telescope and the residual atmospheric perturbations. The PSF of an astronomical image varies with the position in the observed field, which is a crucial aspect in observations on ELTs.
We adapted existing techniques for reconstructing the PSF from telemetry data and only a few atmospheric parameters to make them feasible for the needs of MICADO and MORFEO. In particular, we use an approach for atmospheric tomography with a time series of AO telemetry data in SCAO mode. Additionally, with slight modifications, our method is also feasible for the Multi-Conjugate Adaptive Optics (MCAO) mode. As input, our algorithm requires knowledge of the strength of the different turbulent atmospheric layers as well as their wind speeds and directions in order to perform the tomography step. To obtain the respective contribution to the PSF, we project the reconstructed layers in the direction of interest.
Our results are obtained for a simulated ELT setting with two different end-to-end simulation tools as well as for on-sky data from ERIS@VLT. The reconstructed PSFs are accurate to within approximately 10% in the standard metrics, and the reconstruction is stable for a variety of atmospheric and system parameters.
We also discuss the implementation strategy used to obtain computationally and memory-efficient software.
Speaker: Dr Roland Wagner (RICAM)
13. Singular Value-based Atmospheric Tomography with Fourier Domain Regularization (SAFER)
Atmospheric tomography, the problem of reconstructing atmospheric turbulence profiles from wavefront sensor measurements, is an integral part of many adaptive optics systems. It is used to enhance the image quality of ground-based telescopes, such as for the Multiconjugate Adaptive Optics Relay For ELT Observations (MORFEO) instrument on the Extremely Large Telescope (ELT).
Singular-value and frame decompositions of the underlying atmospheric tomography operator can reveal useful analytical information on this inverse problem, as well as serve as the basis of efficient numerical reconstruction algorithms. In this talk, we extend existing singular value decompositions to more realistic Sobolev settings including weighted inner products, discuss a frame-based (approximate) solution operator, and focus on the numerical implementation of the SVD-based Atmospheric Tomography with Fourier Domain Regularization algorithm (SAFER) and its performance for Multi-Conjugate Adaptive Optics (MCAO) systems. The key features of the SAFER algorithm are the utilization of the FFT and the pre-computation of computationally demanding parts. Together this provides a fast algorithm with lower memory requirements than commonly used Matrix Vector Multiplication (MVM) approaches. We evaluate the performance of SAFER regarding reconstruction quality and computational expense in numerical experiments using the adaptive optics simulation environment COMPASS.
Speaker: Lukas Weissinger (RICAM - Johann Radon Institute for Computational and Applied Mathematics)
7:00 PM Conference dinner (Restaurant Magdas)
Plenary talk (Room O.0.01)
14. Deeply learned regularization for sparse data tomography
Sparse data tomography is a challenging testing ground for several theoretical and numerical studies, for which both variational regularization and data-driven techniques have been investigated. In this talk, I will present hybrid reconstruction frameworks that combine model-based regularization with data-driven approaches by relying on the interplay between sparse regularization theory, applied harmonic analysis and some microlocal tools. The underlying idea is to only learn the part of the reconstruction that provably cannot be handled by model-based methods, thus limiting the role of deep learning to allow for more interpretable solutions. Numerical results on both simulated and measured data will show the effectiveness of the proposed approach.
Speaker: Tatiana Bubba (University of Ferrara)
Session 5 (Room O.0.01)
15. Position-Blind Ptychography: Viability of image reconstruction via data-driven variational inference
Ptychography is a type of coherent diffraction imaging which uses a strongly coherent X-ray source from a synchrotron to reconstruct high resolution images. By shifting the illumination source, it exploits redundancy of multiple diffraction patterns to robustly solve the related phase retrieval problem. The shift parameters are often subject to uncertainty and introduce additional ill-posedness in the reconstruction task. Motivated by applications in single particle imaging, we consider the extreme task of entirely unknown shifts, which have to be recovered jointly with the image. The resulting blind inverse problem requires careful regularization. We explore the ability of learned priors, encoded by a score-based diffusion model, to jointly solve the reconstruction problem and recover the scan positions. In particular, we discuss a Bayesian approach, solving the above task within a variational inference framework.
Speaker: Tim Roith (Helmholtz Imaging, Deutsches Elektronen-Synchrotron DESY)
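In a commonly used idealized model (generic notation; the actual experimental setup may differ), the $j$-th recorded diffraction pattern is
$I_j(k) = \big|\,\mathcal{F}\big[P(r - s_j)\,O(r)\big](k)\,\big|^2,$
with probe $P$, object $O$ and Fourier transform $\mathcal{F}$; in the position-blind setting considered here the scan shifts $s_j$ are unknown and have to be recovered jointly with the object.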
16. A Lifted Bregman Strategy for Training Unfolded Proximal Neural Networks
Unfolded proximal neural networks (PNNs) form a family of methods that combines deep learning and proximal optimization approaches. They consist in designing a task-specific neural network by unrolling a proximal algorithm for a fixed number of iterations, where the linear operators can be learned from a prior training procedure. PNNs have been shown to be more robust than traditional deep learning approaches while reaching at least as good performance, in particular in computational imaging. However, training PNNs still depends on the efficiency of available training algorithms. In this work, we propose a lifted training formulation based on Bregman distances for unfolded PNNs. Leveraging deterministic and stochastic block-coordinate forward-backward methods, we design computational strategies beyond traditional back-propagation methods for solving the learning problem efficiently. We assess the behaviour of the proposed training approach through numerical simulations on image denoising problems, where the structure of the denoising PNN is based on dual proximal gradient iterations.
Speaker: Xiaoyu Wang
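As a minimal sketch of what unrolling a proximal algorithm means in practice (a generic ISTA-style unrolling with learnable linear operators, written for illustration and not taken from the talk):

    import numpy as np

    def soft_threshold(v, tau):
        """Proximal operator of tau * ||.||_1."""
        return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

    def unfolded_pnn(y, Ws, step=0.1, lam=0.05):
        """One forward pass of an unfolded PNN: each layer is one proximal gradient step
        x <- prox(x - step * W^T (W x - y)), with its own learnable linear operator W."""
        x = np.zeros(Ws[0].shape[1])
        for W in Ws:                                  # fixed number of unrolled iterations
            x = soft_threshold(x - step * W.T @ (W @ x - y), step * lam)
        return x

    rng = np.random.default_rng(1)
    Ws = [rng.standard_normal((40, 60)) / np.sqrt(40) for _ in range(5)]  # 5 unrolled layers
    y = rng.standard_normal(40)
    print(unfolded_pnn(y, Ws)[:5])

In an actual PNN the operators W would be learned from data; the lifted Bregman strategy of the talk concerns how this training problem is solved.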
10:40 AM Coffee break (Room O.0.01)
Session 6 (Room O.0.01)
17. On Parameter Identification with Physics-Informed Neural Networks in Three-Dimensional Elasticity
Physics-informed neural networks (PINNs) have emerged as a powerful tool in the scientific machine learning community, with applications to both forward and inverse problems. While they have shown considerable empirical success, significant challenges remain—particularly regarding training stability and the lack of rigorous theoretical guarantees, especially when compared to classical mesh-based methods.
In this talk, we focus on the inverse problem of identifying a spatially varying parameter in a constitutive model of three-dimensional elasticity, using measurements of the system’s state. This setting is especially relevant for non-invasive diagnosis in cardiac biomechanics, where one must also carefully account for the type of boundary data available.
To address this inverse problem, we adopt an all-at-once optimisation framework, simultaneously estimating the state and parameter through a least-squares loss that encodes both available data and the governing physics. For this formulation, we prove stability estimates ensuring that our approach yields a stable approximation of the underlying ground-truth parameter of the physical system independent of a specific discretisation. We then proceed with a PINN-based discretisation and compare it to traditional mesh-based approaches. Our theoretical findings are complemented by illustrative numerical examples.
Speaker: Matthias Höfler (University of Graz)
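Schematically, the all-at-once formulation mentioned above minimizes a combined least-squares loss over state and parameter simultaneously (generic notation, not the exact functional from the talk),
$\min_{u,\,\theta} \ \|\mathcal{O}(u) - y^\delta\|^2 + \lambda\,\|\mathcal{A}(u, \theta)\|^2,$
where $\mathcal{O}$ maps the state $u$ to the measured data, $\mathcal{A}(u,\theta) = 0$ encodes the governing elasticity equations, and $\lambda > 0$ balances data fidelity against the physics residual; in the PINN-based discretization, $u$ and $\theta$ are represented by neural networks.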
18. Nonlinear estimator for Fourier-type wavefront sensing in adaptive optics
Advanced adaptive optics (AO) instruments use Fourier-type wavefront sensors (WFSs) to measure and correct wavefront distortions caused by the Earth's atmosphere. Conventionally, the wavefront reconstruction relies on matrix-vector-multiplications (MVMs). However, these linear estimators assume small wavefront aberrations and may fail to capture the nonlinear behavior of Fourier-type wavefront sensing under high turbulence or low signal-to-noise ratio (SNR) conditions.
In this work, we present a generalized mathematical framework for nonlinear wavefront estimation with Fourier-type WFSs. By leveraging advanced mathematical techniques, including regularization and optimization supported by AI, the proposed estimator accounts for the inherent nonlinearities in the sensor response, enabling robust performance across a wide range of observing conditions.
The new solver is validated through numerical end-to-end simulations. Its significant improvements in wavefront reconstruction accuracy compared to traditional linear approaches, like MVMs, are demonstrated. The generalized framework is adaptable to various Fourier-type sensors and can be integrated into existing AO systems, paving the way for enhanced performance in next-generation telescopes, such as the Extremely Large Telescope (ELT), or laser communication.
Speaker: Victoria Laidlaw (JKU Linz, Austria)
19. Stability and convergence for stochastic gradient descent with decaying Tikhonov regularization
In this talk we study the minimization of convex, $L$-smooth functions defined on a separable real Hilbert space. We analyze regularized stochastic gradient descent (reg-SGD), a variant of stochastic gradient descent that uses a Tikhonov regularization with time-dependent, vanishing regularization parameter. We prove strong convergence of reg-SGD to the minimum-norm solution of the original problem without additional boundedness assumptions. Moreover, we quantify the rate of convergence and optimize the interplay between step-sizes and regularization decay. Our analysis reveals how vanishing Tikhonov regularization controls the flow of SGD and yields stable learning dynamics, offering new insights into the design of iterative algorithms for convex problems, including those that arise in ill-posed inverse problems. We validate our theoretical findings through numerical experiments on image reconstruction and ODE-based inverse problems.
This is joint work with Sebastian Kassing and Leif Döring.
Speaker: Simon Weissmann (Universität Mannheim)
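A toy sketch of the reg-SGD iteration described above, on a simple linear least-squares problem (step-size and decay schedules chosen for illustration only, not the ones analyzed in the talk):

    import numpy as np

    rng = np.random.default_rng(2)
    A = rng.standard_normal((200, 50))
    x_true = rng.standard_normal(50)
    y = A @ x_true

    x = np.zeros(50)
    for k in range(1, 20001):
        i = rng.integers(200)                    # sample one data point
        grad = (A[i] @ x - y[i]) * A[i]          # stochastic gradient of 0.5*(a_i^T x - y_i)^2
        alpha_k = 1.0 / k**0.6                   # vanishing Tikhonov parameter
        gamma_k = 0.5 / k**0.7                   # decaying step size
        x = x - gamma_k * (grad + alpha_k * x)   # reg-SGD step

    print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))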
12:15 PM Lunch
Session 7 (Room O.0.01)
20. A convex lifting approach for the Calderón problem
The Calderón problem consists in recovering an unknown coefficient of a partial differential equation from boundary measurements of its solution. These measurements give rise to a highly nonlinear forward operator. As a consequence, the development of reconstruction methods for this inverse problem is challenging, as they usually suffer from the problem of local convergence. To circumvent this issue, we propose an alternative approach based on lifting and convex relaxation techniques that have been successfully developed for solving finite-dimensional quadratic inverse problems. This leads to a convex optimization problem whose solution coincides with the sought-after coefficient, provided that a non-degenerate source condition holds. We demonstrate the validity of our approach on a toy model where the solution of the partial differential equation is known everywhere in the domain. In this simplified setting, we verify that the non-degenerate source condition holds under certain assumptions on the unknown coefficient. We leave the investigation of its validity in the Calderón setting for future works.
Speaker: Simone Sanna (Università di Genova)
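For intuition, in the finite-dimensional quadratic case alluded to above (generic notation, not the Calderón setting itself): measurements $y_i = \langle x, Q_i x\rangle$ that are quadratic in the unknown $x$ become linear in the lifted variable $X = x x^{\top}$, namely $y_i = \langle Q_i, X\rangle$, and the nonconvex rank-one constraint on $X$ is relaxed to positive semidefiniteness together with a convex penalty; under a non-degenerate source condition the minimizer of the relaxed problem is exactly $x x^{\top}$.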
21. Reconstruction from Local Fractional Data and Applications to Inverse Problems
We introduce two reconstruction schemes that enable the recovery of a function in the entire Euclidean space $\mathbb{R}^n$ from local data $(u|_W, (-\Delta)^s u|_W)$, where $W$ is an arbitrarily small nonempty open set. These procedures rely crucially on the weak unique continuation property (UCP) for the fractional Laplacian.
We apply these schemes to two distinct inverse problems. Following the seminal work from Ghosh et al., the first concerns the recovery of a potential (Calderón-type problem) from the fractional Schrödinger equation under nonlocal Robin-type exterior conditions. The second involves recovering the solution to the space-fractional heat equation in $\mathbb{R}^n$ from localized time-dependent measurements within a ball.
To tackle these problems, we construct new analytical tools such as a generalized Kelvin transform and a fractional Robin-to-Robin map. Finally, we provide numerical simulations for both reconstruction methods, illustrating the stability issues and the severe ill-posedness inherent to such inverse problems.
Speaker: Ethan Rinaldo (Université des Antilles)
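For reference, the fractional Laplacian appearing in the data pair above is the standard singular-integral operator
$(-\Delta)^s u(x) = c_{n,s}\ \mathrm{p.v.}\!\int_{\mathbb{R}^n} \frac{u(x) - u(y)}{|x - y|^{n + 2s}}\, dy, \qquad 0 < s < 1,$
whose nonlocal character underlies the unique continuation property exploited in the reconstruction schemes.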
22. Toward Practical Implementation of Inverse Problem Theory: Applications to Structural Health Monitoring and Automotive Systems
In inverse problem research, both the mathematical foundations and the development of implementation technologies for industrial applications play essential roles. This presentation highlights two specific applications among several industrial cases, including inspection of wind turbine blades, quality control of smartphone cameras, maintenance of marine structures, and state estimation in automotive development.
The first case focuses on the identification of corrosion rates in marine structures based on electric potential measurements. Here, easily obtainable underwater electric potential data are used as observations. The inverse problem is formulated as a boundary condition identification problem by applying finite element method (FEM)-based electric field analysis and Bayesian inference.
The second case addresses the estimation of pressure distribution on an automotive brake disc. Inspired by the sensory organs of crabs, a compliant deformation amplification mechanism was developed. Using this mechanism, minute deformations of the brake disc were measured and the pressure distribution was estimated through inverse analysis.
These methods were implemented in industrial settings, and their effectiveness was validated. This presentation demonstrates how inverse problem theory can be translated into practical technologies, emphasizing the importance of bridging theory and application in industrial contexts.
Speaker: Kenji Amaya (Institute of Science Tokyo)
23. Bayesian Estimation of Psychological Parameters in Purchasing Behavior Based on Prospect Theory Using Purchase Data
This study focuses on estimating psychological parameters that influence decision-making, based on virtually collected consumer purchase data, and constructs a consumer purchasing behavior model grounded in prospect theory. A virtual shopping environment was developed in which consumers were presented with various combinations of the same product differing in expiration dates and prices, allowing for the collection of diverse purchasing data. The utility function incorporates key behavioral components, including price sensitivity, expiration-date sensitivity, gain/loss asymmetry, and internal reference price. Using consumer choice data, parameter estimation was conducted within a Bayesian framework using the Markov Chain Monte Carlo (MCMC) method. The results confirmed key behavioral characteristics predicted by prospect theory, particularly loss aversion and heterogeneity in reference prices, thereby demonstrating the psychological validity of the proposed model. These findings provide a foundation for integrating behavioral preferences into demand forecasting and pricing strategies.
Speaker: Mr Tasuke Amaya (Waseda University)
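For background, the prospect-theory value function underlying such models is typically of the form (standard Kahneman-Tversky parametrization; the utility function of the talk may include further components such as price and expiration-date sensitivity)
$v(x) = \begin{cases} x^{\alpha}, & x \ge 0,\\ -\lambda\,(-x)^{\beta}, & x < 0,\end{cases}$
where $x$ is the gain or loss relative to an internal reference point (here, a reference price) and $\lambda > 1$ captures loss aversion.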