Conference Abstracts

2012 SIAM Conference on Applied Linear Algebra



Index

Abderraman Marrero, Jesus, 72
Absil, Pierre-Antoine, 49, 78
Acar, Evrim, 40
Agullo, Emmanuel, 39, 51
Ahuja, Kapil, 41, 45
Aidoo, Anthony, 81
Aishima, Kensuke, 50
Akian, Marianne, 53
Al-Ammari, Maha, 79
Al-Mohy, Awad, 96
Alastair Spence, Alastair, 56
Albera, Laurent, 86
Ali, Murtaza, 37
Allamigeon, Xavier, 59
Alter, O., 73
Amat, Sergio, 67, 76, 96
Amestoy, Patrick, 65
Ammar, Amine, 26
Amodei, Luca, 69
Amsallem, David, 15
Andrianov, Alexander, 91
Antoine, Xavier, 24
Antoulas, A.C., 43
Araujo, Claudia Mendes, 95
Arbenz, Peter, 86
Arioli, Mario, 36, 45, 58
Armentia, Gorka, 84
Ashcraft, Cleve, 65
Asif, M. Salman, 53
Aurentz, Jared L., 64
Avron, Haim, 21
Axelsson, Owe, 17

Baboulin, Marc, 17, 21
Bai, Zhong-Zhi, 25, 86, 92
Bailey, David H., 65
Baker, Christopher G., 77
Ballard, Grey, 27, 34
Baragana, Itziar, 74
Barioli, Francesco, 32
Barlow, Jesse, 37
Barreras, Alvaro, 14
Barrett, Wayne, 32
Barrio, Roberto, 65, 66, 71
Basermann, Achim, 75
Batselier, Kim, 71, 80
Baum, Ann-Kristin, 82
Baykara, N. A., 97
Beattie, Christopher, 18, 43, 46
Bebiano, Natalia, 18
Becker, Dulceneia, 17, 21
Becker, Stephen, 53

Beckermann, Bernhard, 18
Beitia, M. Asuncion, 74
Belhaj, Skander, 89
Ben-Ari, Iddo, 24
Bendito, Enrique, 19, 66
Benner, Peter, 22, 41, 42, 76, 81
Benning, Martin, 59
Bergamaschi, Luca, 58
Bergqvist, Goran, 87
Berman, Abraham, 33
Bernasconi, Michele, 81
Beyer, Lucas, 37
Bientinesi, Paolo, 37
Bing, Zheng, 82
Bini, Dario A., 12
Bioucas-Dias, Jose, 55
Birken, Philipp, 92
Blank, Luise, 32
Boiteau, Olivier, 65
Boito, Paola, 63
Bolten, Matthias, 14
Boman, Erik G., 39
Borges, Anabela, 83
Borm, Steffen, 56
Borobia, Alberto, 80
Borwein, Jonathan M., 65
Bouhamidi, Abderrahman, 28, 79
Boumal, Nicolas, 78
Boutsidis, Christos, 18
Boyanova, Petia, 17
Bozkurt, Durmus, 71, 72
Bozovic, Nemanja, 44
Bras, Isabel, 81
Braman, Karen, 72
Brannick, James, 30
Brauchart, Johann S., 69
Breiten, Tobias, 22
Bro, Rasmus, 40
Bru, Rafael, 52
Brualdi, Richard A., 26
Buchot, Jean-Marie, 69
Budko, Neil, 24
Bueno, Maria I., 64
Buluc, Aydın, 34
Busquier, S., 67
Butkovic, Peter, 60
Buttari, Alfredo, 65
Byckling, Mikko, 24
Bıyıkoglu, Turker, 66

Cai, Xiao-Chuan, 45
Calvin, Christophe, 50


Campos, Carmen, 57
Candes, Emmanuel, 53, 59
Candela, Vicente F., 67, 68
Canning, Andrew, 90
Canogar, Roberto, 80
Canto, Rafael, 14
Carapito, Ana Catarina, 81
Carden, Russell L., 46, 60
Cardoso, Joao R., 95
Carmona, Angeles, 19, 66
Carpentieri, Bruno, 94
Carriegos, Miguel V., 98
Carson, Erin, 34, 34
Casadei, Astrid, 74
Castro Gonzalez, Nieves, 40
Catral, M., 14
Cerdan, Juana, 58
Chan, Raymond H., 61, 92
Chang, Xiao-Wen, 37
Chaturantabut, Saifon, 18
Chen, Jie, 29
Chiang, Chun-Yueh, 47
Chinesta, Francisco, 26
Chitour, Yacine, 19
Chiu, Jiawei, 99
Choirat, Christine, 81
Chow, Edmond, 35
Chu, Delin, 47
Chu, Eric King-wah, 43, 44, 94
Chu, Moody T., 44
Cicone, Antonio, 88
Constantinides, George A., 73
Cornelio, Anastasia, 35
Cravo, Gloria, 87
Criado, Regino, 94
Cueto, Elias, 26, 26
Cvetkovic-Ilic, Dragana S., 41

da Cruz, Henrique F., 80
Dachsel, Holger, 48
Dag, Hasan, 96
Dahl, Geir, 26
Damm, Tobias, 69
Dandouna, Makarem, 50, 51
Dassios, Ioannis K., 78
de Hoop, Maarten V., 65
de Hoyos, Inmaculada, 74
de la Puente, Maria Jesus, 59
De Lathauwer, Lieven, 40
de Lima, Teresa Pedroso, 83
De Moor, Bart, 71, 80
De Sterck, Hans, 10, 40
de Sturler, Eric, 16, 18, 41, 45
De Teran, Fernando, 64, 64
De Vlieger, Jeroen, 61

Deadman, Edvin, 13, 84
Delgado, Jorge, 95
Delvenne, Jean-Charles, 93
Demanet, Laurent, 99
Demiralp, Metin, 78, 97, 97
Demmel, James W., 27, 28, 34
Derevyagin, Maxim, 22
Devesa, Antonio, 79
Dhillon, Inderjit S., 29
Di Benedetto, Maria Domenica, 19
Diao, Huaian, 37, 48
Djordjevi, Dragan S., 41
Dmytryshyn, Andrii, 74, 74
Dolean, Victorita, 39
Donatelli, Marco, 14, 35
Dongarra, Jack, 17, 21, 51
Dopico, Froilan M., 42, 42, 54, 64
Dreesen, Philippe, 71, 80
Drineas, Petros, 12, 18
Drmac, Zlatko, 46
Druinsky, Alex, 27, 64
Drummond, L. A. (Tony), 51, 57
Druskin, Vladimir, 18, 22
Draganescu, Andrei, 92
Du, Xiuhong, 31
Duan, Yong, 94
Dubois, Jerome, 50
Duff, Iain S., 36, 36, 39
Duminil, Sebastien, 100
Dunlavy, Daniel M., 40

Effenberger, Cedric, 38
Eidelman, Yuli, 57, 63
Elman, Howard, 56
Emad, Nahid, 50, 51
Embree, Mark, 23, 60
Encinas, Andres M., 19, 66
Erhel, Jocelyne, 58
Erlacher, Virginie, 33
Erway, Jennifer B., 55, 61
Ezquerro, J.A., 96
Ezzatti, Pablo, 37

Faber, Vance, 73
Fabregat-Traver, Diego, 37
Fadili, Jalal, 53
Fagas, Giorgos, 96
Falco, Antonio, 33
Falgout, Robert D., 16
Fallat, Shaun M., 32, 33
Fan, Hung-Yuan, 94
Farber, Miriam, 26
Farhat, Charbel, 15
Faverge, Mathieu, 51
Feng, Lihong, 42
Fenu, Caterina, 93


Fercoq, Olivier, 94
Fernandes, Rosario, 80
Ferreira, Carla, 54
Ferrer, Josep, 98
Ferronato, Massimiliano, 52
Fezzani, Riadh, 31
Figueiredo, Mario, 55
Filar, Jerzy, 24
Flaig, Cyril, 86
Fornasini, Ettore, 20
Forster, Malte, 88
Freitag, Melina A., 56, 79
Fritzsche, Bernd, 89
Fritzsche, David, 86
Frommer, Andreas, 86, 91
Furtado, Susana, 18
Futamura, Yasunori, 50

Galantai, Aurel, 51
Gallivan, Kyle, 77
Gansterer, Wilfried N., 98
Gao, Weiguo, 82
Garcia, Antonio G., 84
Garcia, Esther, 94
Garcia-Planas, M. Isabel, 74, 83, 98
Gaspar, Francisco J., 99
Gasso, Maria T., 95
Gaubert, Stephane, 53, 59, 94
Gaul, Andre, 90
Gavalec, Martin, 54, 60, 72
Geebelen, Dries, 75
Gemignani, Luca, 63
Geuzaine, Christophe, 24
Ghandchi Tehrani, Maryam, 62
Ghysels, Pieter, 34
Gil-Farina, Maria Candelaria, 99
Gillis, Nicolas, 55, 82
Gillman, Adrianna, 91
Gimenez, Isabel, 95
Giraldi, Loic, 34
Giraud, Luc, 39, 51
Giscard, Pierre-Louis, 93
Gohberg, Israel, 63
Gokmen, Muhittin, 87
Goldberg, Felix, 33
Goldfarb, Donald, 59
Gonzalez-Concepcion, Concepcion, 99
Goubault, Eric, 59
Gracia, Juan-Miguel, 54, 84
Graf, Manuel, 70
Grant, Michael, 53
Grasedyck, Lars, 11, 69
Grassmann, Winfried, 91
Gratton, Serge, 31
Greengard, Leslie, 57

Greif, Chen, 20, 20
Grigori, Laura, 27, 31, 34, 86
Grimes, Roger, 65
Grubisic, Luka, 97
Gu, Ming, 27, 88
Guermouche, Abdou, 39, 51
Guerrero, Johana, 61
Gugercin, Serkan, 16, 18, 41, 43, 46
Guglielmi, Nicola, 15, 60, 62, 88
Guivarch, Roman, 39
Guo, Chun-Hua, 68
Gupta, Anshul, 11
Gurbuzbalaban, Mert, 60
Gurvit, Ercan, 97
Guterman, Alexander, 80
Gutierrez, Jose M., 67
Gutierrez-Gutierrez, Jesus, 23
Gutknecht, Martin H., 90
Guttel, Stefan, 18, 84
Gyamfi, Kwasi Baah, 81

Hached, Mustapha, 28, 79
Hall, H. Tracy, 32
Han, Lixing, 14
Hanke, Martin, 35
Hartwig, R., 48
Hauret, Patrice, 39
Have, Pascal, 30
Hayami, Ken, 38
Hein, Matthias, 29
Heinecke, Alexander, 49
Heins, Pia, 59
Henson, Van Emden, 16
Herault, Thomas, 51
Hernandez, Araceli, 47
Hernandez, M.A., 96
Hernandez-Medina, Miguel Angel, 84
Heroux, Michael A., 39
Herranz, Victoria, 79
Herzog, Roland, 11, 32
Hess, Martin, 76
Heuveline, Vincent, 36
Higham, Des, 10
Higham, Nicholas J., 13, 43, 77, 84, 96
Hladik, Milan, 88
Ho, Kenneth L., 57
Hogben, Leslie, 32
Hollanders, Romain, 93
Holodnak, John T., 45
Holtz, Olga, 27
Hossain, Mohammad-Sahadet, 81
Hsieh, Cho-Jui, 29
Hu, Xiaofei, 55
Huang, Ting-Zhu, 94
Huang, Tsung-Ming, 47


Huang, Yu-Mei, 86
Huckle, Thomas, 14, 52
Huhs, Georg, 50
Huhtanen, Marko, 24
Humet, Matthias, 57
Hunter, Jeffrey J., 24
Hunutlu, Fatih, 97
Hur, Youngmi, 76

Iakovidis, Marios, 96
Iannazzo, Bruno, 12, 85
Igual, Francisco D., 37
Ipsen, Ilse C. F., 17, 45
Ishteva, Mariya, 49

Jagels, Carl, 73
Jain, Sapna, 78
Jaklic, Gasper, 93
Jaksch, Dieter, 93
Jameson, Antony, 92
Janna, Carlo, 52
Jarlebring, Elias, 38, 69
Jbilou, Khalide, 28, 28, 79
Jerez, Juan L., 73
Jeuris, Ben, 49
Jiang, Hao, 66, 71
Jiang, Mi, 90
Jimenez, Adrian, 59
Jing, Yan-Fei, 94
Jiranek, Pavel, 31
Johansson, Pedher, 74
Johansson, Stefan, 74, 74
John Pearson, John, 32
Johnson, Charles R., 19
Jokar, Sadegh, 43
Jonsthovel, Tom, 52
Joshi, Shantanu H., 61
Jungers, Raphael M., 88, 93

Kabadshow, Ivo, 48
Kagstrom, Bo, 74
Kahl, Karsten, 30, 88, 91
Kambites, Mark, 54
Kannan, Ramaseshan, 77
Karimi, Sahar, 59
Karow, Michael, 54
Katz, Ricardo D., 59
Kaya, Oguz, 35
Kempker, Pia L., 83
Kerrigan, Eric C., 73
Khabou, Amal, 27
Khan, Mohammad Kazim, 35
Kılıc, Mustafa, 60
Kilmer, Misha E., 10, 18
Kirkland, Steve, 25
Kirstein, Bernd, 89

Klein, Andre, 96
Klymko, Christine, 85
Knight, Nicholas, 34, 34
Knizhnerman, Leonid, 84
Koev, Plamen, 95
Kolda, Tamara G., 40
Kollmann, Markus, 25
Kopal, Jiri, 42
Korkmaz, Evrim, 78
Koutis, Ioannis, 21
Kozubek, Tomas, 74
Kraus, Johannes, 16
Krendl, Wolfgang, 25
Kressner, Daniel, 44, 60
Kronbichler, Martin, 17
Krukier, Boris, 99
Krukier, Lev, 99
Kruschel, Christian, 75
Kucera, Radek, 74
Kumar, Pawan, 76
Kumar, Shiv Datt, 80
Kunis, Susanne, 48
Kuo, Chun-Hung, 44
Kuo, Yueh-Cheng, 47
Kurschner, Patrick, 79

L’Excellent, Jean-Yves, 65
Laayouni, Lahcen, 92
Lacoste, Xavier, 75
Landi, Germana, 55
Langguth, Johannes, 36
Langlois, Philippe, 78
Langou, Julien, 51
Lantner, Roland, 93
Latouche, Guy, 68
Lattanzi, Marina, 47
Laub, Alan J., 18
Leader, Jeffery J., 77
Lecomte, Christophe, 62
Legrain, Gregory, 34
Lelievre, Tony, 33
Lemos, Rute, 80
Leydold, Josef, 66
Li, Qingshen, 90
Li, Ren-Cang, 68
Li, Shengguo, 88
Li, Tiexiang, 43, 44
Li, Xiaoye S., 62, 63, 65
Liesen, Jorg, 16, 73, 90
Lim, Lek-Heng, 45
Lim, Yongdo, 49
Lin, Lijing, 84
Lin, Lin, 85
Lin, Matthew M., 44, 47
Lin, Wen-Wei, 43, 44, 47


Lin, Yiding, 81
Lingsheng, Meng, 82
Lippert, Th., 91
Lipshitz, Benjamin, 27
Lisbona, Francisco J., 99
Litvinenko, Alexander, 27
Lorenz, Dirk, 53, 75
Lu, Linzhang, 89
Lubich, Christian, 15
Lucas, Bob, 65
Luce, Robert, 36
Lukarski, Dimitar, 36

Ma, Shiqian, 59
Mach, Susann, 32
Mackey, D. Steven, 64, 64
Maeda, Yasuyuki, 50
Magdon-Ismail, Malik, 18
Magrenan, A., 67
Magret, M. Dolors, 83, 83
Mahoney, Michael W., 21, 29
Manguoglu, Murat, 30, 96
Marchesi, Nichele, 93
Marcia, Roummel F., 61
Marco, Ana, 76
Marek, Ivo, 24
Marilena, Mitrouli, 89
Marin, Jose, 52, 58
Markopoulos, Alexandros, 74
Marques, Osni, 57
Marquina, Antonio, 61
Martin, David, 22
Martin-Dorel, Erik, 66
Martinez, Angeles, 58
Martinez, Jose-Javier, 76
Martinsson, Per-Gunnar, 91
Mas, Jose, 52, 58
Mason, Paolo, 19
Masson, Roland, 30
Mastronardi, Nicola, 77, 89
Matthies, Hermann G., 27
Mayo, Jackson R., 40
McNally, Arielle Grim, 45
Meerbergen, Karl, 21, 38, 56, 61, 87, 97
Mehrmann, Volker, 43
Melard, Guy, 96
Melchior, Samuel, 98
Melman, Aaron, 89
Melquiond, Guillaume, 66
Meng, Xiangrui, 29
Mengi, Emre, 60
Mertens, Clara, 71
Metsch, Bram, 100
Meurant, Gerard, 91
Michiels, Wim, 21, 38, 62, 69, 97

Miedlar, Agnieszka, 45
Mikkelsen, Carl Christian K., 81
Miller, Gary, 21
Miller, Killian, 40
Milzarek, Andre, 53
Ming, Li, 45
Mingueza, David, 83
Miodragovic, Suzana, 97
Mishra, Aditya Mani, 85
Mishra, Ratnesh Kumar, 80
Mitjana, Margarida, 19, 66
Miyata, Takafumi, 77
Modic, Jolanda, 93
Moller, Michael, 59
Montoro, M. Eulalia, 83
Moreira Cardoso, Domingos, 25
Morigi, Serena, 35
Morikuni, Keiichi, 38
Moro, Julio, 54
Mosic, D., 41
Moufawad, Sophie, 86
Moulding, Erin, 20
Muhic, Andrej, 56
Muller, Jean-Michel, 66

Nabben, Reinhard, 90
Nagy, James G., 35
Najm, Habib, 33
Nakata, Maho, 50
Nakatsukasa, Yuji, 13, 38, 43, 50
Napov, Artem, 63
Nataf, Frederic, 30, 31, 39
Naumovich, Anna, 88
Neumann, Michael, 14
Newton, Chris, 17
Neytcheva, Maya, 17
Ng, Esmond G., 36
Ng, Michael K., 55, 86
Nguyen, Giang T., 68
Niederbrucker, Gerhard, 98
Niknejad, Amir, 23
Noschese, Silvia, 28
Noutsos, Dimitrios, 13
Nouy, Anthony, 26, 34

Ogita, Takeshi, 51
Oishi, Shin’ichi, 50
Okoudjou, Kasso, 70
Olshevsky, Vadim, 42
Orban, Dominique, 20
Osher, Stanley J., 61
Overton, Michael L., 60

Padhye, Sahadeo, 85
Paige, Chris, 45
Pajonk, Oliver, 27


Pan, Victor, 62
Psarrakos, Panayiotis, 10
Pandey, S. N., 85
Papathanasiou, Nikolaos, 14
Parlett, Beresford, 54, 88
Patricio, Pedro, 48
Pauca, Paul, 55
Pearson, John, 25
Pechstein, Clemens, 39
Pedregal, P., 67
Pedroche, Francisco, 94
Pelaez, Maria Jose, 54
Pelaez, Maria J., 14
Peled, Inon, 27
Peng, Richard, 21
Pena, Juan Manuel, 12, 14, 95
Pena, Marta, 98
Perea, Carmen, 79
Perez-Villalon, Gerardo, 84
Peris, Rosa M., 68
Pestana, Jennifer, 20, 20
Pestano-Gabino, Celina, 99
Petiton, Serge G., 50
PETSc Team, 51
Pfetsch, Marc, 53
Pfluger, Dirk, 49
Piccolomini, Elena Loli, 35, 55
Pichugina, Olga, 99
Pinheiro, Eduardo, 23
Pini, Giorgio, 52
Pippig, Michael, 48
Plavka, Jan, 60
Plemmons, Robert J., 14, 55
Plestenjak, Bor, 56
Polizzi, Eric, 30
Poloni, Federico, 46, 84
Ponnapalli, S.P., 73
Popa, Constantin, 82
Popescu, Radu, 39
Portal, Alberto, 84
Poulson, Jack, 58
Prasad, Sudhakar, 55
Preclik, Tobias, 82
Providencia, J. da, 18
Psarrakos, Panayiotis, 14

Quraischi, Sarosh, 43

Rachidi, Mustapha, 72
Rajamanickam, Siva, 39
Ralha, Rui, 13
Ramage, Alison, 17
Ramaswami, V., 68
Ramet, Pierre, 74, 75
Ramlau, Ronny, 35
Ran, Andre C.M., 83

Rantzer, Anders, 43
Raydan, Marcos, 61
Rees, Tyrone, 20
Reichel, Lothar, 22, 28, 35, 73
Reis, Timo, 46
Relton, Samuel, 96
Remis, Rob, 18
Ren, Zhi-Ru, 92
Requena, Veronica, 79
Rittich, H., 88, 91
Robert, Yves, 51
Roca, Alicia, 83
Rocha, Paula, 81
Rodrigo, Carmen, 99
Rodriguez, Giuseppe, 93
Rohwedder, Thorsten, 15
Roitberg, Inna, 89
Rojas, Marielba, 61
Roman, Jean, 39, 51
Roman, Jose E., 57
Romance, Miguel, 94
Romberg, Justin, 53
Romero Sanchez, Pantaleon David, 67
Rosic, Bojana, 27
Rottmann, Matthias, 88
Rozloznik, Miroslav, 42
Rude, Ulrich, 82
Ruiz, Daniel, 39
Rump, Siegfried M., 66

Saad, Yousef, 31
Saak, Jens, 69
Sachs, Ekkehard, 32
Sadok, Hassane, 28, 100
Saied, Faisal, 30
Sakhnovich, Alexander, 89
Sakurai, Tetsuya, 50
Salam, Ahmed, 72
Salinas, Pablo, 99
Sameh, Ahmed H., 29, 30
Sanders, Geoffrey D., 16
Sarbu, Lavinia, 32
Sarkis, Marcus, 31
Saunders, M. A., 29
Saunders, M.A., 73
Scarpa, Giannicola, 33
Schaerer, Christian E., 31
Schaffrin, Burkhard, 75
Scheichl, Robert, 39
Scheinberg, Katya, 59
Schenk, Olaf, 29
Schroder, Christian, 56
Schroder, Jacob B., 16
Schulze Grotthoff, Stefan, 98
Schwartz, Oded, 27, 27, 34


Schweitzer, Marcel, 99
Sciriha, Irene, 25
Scott, Jennifer, 58
Sedlacek, Matous, 52
Senhadji, Lotfi, 86
Sergeev, Sergei, 54
Seri, Raffaello, 81
Serra Capizzano, Stefano, 13
Serrano, Sergio, 65, 71
Severini, Simone, 33
Sgallari, Fiorella, 35
Shader, Bryan, 32
Shah, Mili, 85
Shank, Stephen D., 86
Shao, Meiyue, 82
Sharify, Meisam, 53
Shieh, Shih-Feng, 47
Shu, Huazhong, 86
Sifuentes, Josef, 23
Sigalotti, Mario, 19
Simic, Slobodan K., 67
Simoncini, Valeria, 22, 25, 81
Singer, Amit, 78
Singh, Jagjit, 78
Smith, Barry, 51
Smoktunowicz, Alicja, 42
Snow, Kyle, 75
Soares, Graca, 80
Sogabe, Tomohiro, 77
Sokolovic, Sonja, 88
Solomonik, Edgar, 28
Sonneveld, Peter, 31
Soodhalter, Kirk M., 41
Sootla, Aivar, 43
Sorber, Laurent, 40
Sorensen, D.C., 46
Sou, Kin Cheong, 43
Sourour, Ahmed R., 90
Spence, Alastair, 56
Spillane, Nicole, 39
Spinu, Florin, 92
Sridharan, Raje, 80
Stevanovic, Dragan, 67
Stoll, Martin, 32, 32
Stovicek, Pavel, 22
Strakos, Zdenek, 16
Strakova, Hana, 98
Stykel, Tatjana, 46
Su, Yangfeng, 73
Sun, Hai-Wei, 13
Sutton, Brian D., 82
Suykens, Johan, 75
Szydlarski, Mikolaj, 30
Szyld, Daniel B., 20, 31, 41, 86, 92

Tam, Tin-Yau, 71
Tang, Ping Tak Peter, 29
Tanner, Jared, 10
Tarragona, Sonia, 98
Taslaman, Leo, 38
Tebbens, Jurjen Duintjer, 91
Thome, Nestor, 47
Thornquist, Heidi, 58
Thwaite, Simon, 93
Tichy, Petr, 73
Tillmann, Andreas, 53
Tisseur, Francoise, 12, 38, 64, 77, 79
Titley-Peloquin, David, 31
Tobler, Christine, 44
Toledo, Sivan, 21, 27, 64
Tomaskova, Hana, 72
Tomov, Stanimire, 17
Tonelli, Roberto, 93
Torregrosa, Juan R., 95
Tran, John, 65
Treister, Eran, 15
Trenn, Stephan, 20
Triantafyllou, Dimitrios, 89
Trillo, Juan C., 76
Truhar, Ninoslav, 97
Tsatsomeros, Michael, 10
Tuma, Miroslav, 42, 52
Tunc, Birkan, 87
Turan, Erhan, 86
Turkmen, Ramazan, 94
Tyaglov, Mikhail, 19

Ucar, Bora, 36
Ulbrich, Michael, 53
Ulukok, Zubeyde, 94
Urbano, Ana M., 14
Urquiza, Fabian, 47
Uschmajew, Andre, 15, 49

V. de Hoop, Maarten, 62, 63
Valcher, Maria Elena, 20
Van Barel, Marc, 40, 57, 71
Van Beeumen, Roel, 21, 97
van de Geijn, Robert A., 37
van den Berg, Ewout, 59
van den Driessche, P., 32
van der Holst, Hein, 32
Van Dooren, Paul, 49, 77, 89, 98
van Gijzen, Martin B., 41, 52
Van Houdt, Benny, 68
Van Loan, Charles F., 73
van Schuppen, Jan H., 83
Vandebril, Raf, 23, 49, 64, 71, 87
Vandereycken, Bart, 49, 49
Vandewalle, Joos, 75
Vannieuwenhoven, Nick, 87


Vanroose, Wim, 34
Varona, J. L., 67
Vassalos, Paris, 13
Vassilevski, Panayot S., 16
Vavasis, Stephen, 59
Velasco, Francisco E., 84
Verde-Star, Luis, 71
Veselic, Kresimir, 97
Voss, Heinrich, 38
Vuik, Kees, 52

Wagenbreth, Gene, 65
Wakam, Desire Nuentsa, 58
Wang, Li, 90
Wang, Lu, 86
Wang, Shen, 62, 63, 65
Wang, Wei-chao, 68
Wang, Wei-guo, 68
Wang, Xiang, 95
Wathen, Andrew J., 20, 20, 32
Watkins, David S., 64
Weber, Marcus, 23
Wei, Yimin, 37, 48
Weisbecker, Clement, 65
Weiss, Jan-Philipp, 36
Weng, Peter Chang-Yi, 43, 94
Willbold, Carina, 46
Wirth, Fabian, 20
Wittum, Gabriel, 30
Womersley, Robert S., 69
Wu, Chin-Tien, 47
Wu, Minghao, 56
Wu, Xunxun, 17
Wulling, Wolfgang, 45
Wyatt, Sarah, 16

Xia, Jianlin, 62, 63, 65
Xu, Qingxiang, 48
Xu, Shufang, 68
Xue, Fei, 41
Xue, Jungong, 68, 82

Yamamoto, Kazuma, 50
Yamamoto, Yusaku, 72
Yamanaka, Naoya, 50
Yamazaki, Ichitaro, 50
Yang, Chao, 12, 85
Yang, X., 92
Yavneh, Irad, 15
Ye, Qiang, 44
Ye, Xuancan, 32
Yetkin, E. Fatih, 96
Yıldırım, E. Alper, 60
Yildiztekin, Kemal, 38
Yilmaz, Fatih, 72
Yin, Jun-Feng, 90

Yoshimura, Akiyoshi, 85
Yuan, Fei, 89
Yuan, Xiaoming, 61

Zangre, Ibrahim, 24
Zaslavsky, Mikhail, 22
Zemach, Ran, 15
Zenadi, Mohamed, 39
Zhang, Hong, 51
Zhang, Jiani, 55
Zhang, Qiang, 55
Zhang, Shao-Liang, 77
Zhang, Wenxing, 61
Zhang, Yujie, 73
Zhao, Tao, 30
Zheng, Fang, 76
Zheng, Ning, 90
Zhlobich, Pavel, 42
Zhou, Xiaoxia, 90
Zietak, Krystyna, 13
Zikatanov, Ludmil T., 16
Zimmermann, Karel, 54
Zinemanas, Alejandro, 37
Zollner, Melven, 75
Zounon, Mawussi, 51
Zouros, Grigorios, 24
Zuk, Or, 23
Zulehner, Walter, 25, 25


IP 1. Challenges in modern medical image reconstruction
A common problem in medical imaging is the reconstruction of one or more anomalous hot spots distinct from the background. Many such problems are severely data limited and highly noise sensitive. Thus, a voxel-based reconstruction could be both overkill and an overreach. Shape-based methods can be used as an alternative; a recent such proposal is to use a parametric level set (PaLS) approach, whereby the image is described using the level set of a linear combination of a relatively small number of basis functions of compact support. Remaining concerns are the nonlinear least squares problem for the image model parameters and, in the case of a nonlinear forward model, the solution of the large-scale linear systems at each step of the optimization process. We report on recent advances on both fronts, notably a trust-region regularized Gauss-Newton process for the optimization and a reduced-order modeling approach tailored to the PaLS formulation.
Misha E. Kilmer
Dept. of Mathematics
Tufts University
[email protected]
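The talk's PaLS model and trust-region details are not reproduced here; the sketch below only illustrates the general idea of a regularized Gauss-Newton step (a Levenberg-Marquardt-style damping term `lam` plays the role of the trust-region safeguard) on a hypothetical toy exponential-fitting problem.

```python
import numpy as np

def gauss_newton_regularized(r, J, x0, lam=1e-3, iters=50):
    """Regularized Gauss-Newton: each step solves (J'J + lam*I) dx = -J'r."""
    x = x0.astype(float)
    for _ in range(iters):
        res, Jx = r(x), J(x)
        dx = np.linalg.solve(Jx.T @ Jx + lam * np.eye(x.size), -Jx.T @ res)
        x = x + dx
    return x

# Toy model parameters: fit y = a*exp(b*t) to exact data with a=2, b=-1.
t = np.linspace(0.0, 2.0, 20)
y = 2.0 * np.exp(-1.0 * t)
r = lambda x: x[0] * np.exp(x[1] * t) - y                        # residual
J = lambda x: np.column_stack([np.exp(x[1] * t),                 # d r / d a
                               x[0] * t * np.exp(x[1] * t)])     # d r / d b
x = gauss_newton_regularized(r, J, np.array([1.0, 0.0]))
```

Because the data are noise-free the iteration recovers (a, b) = (2, -1) to high accuracy; in the imaging setting the residual would instead come from the forward model applied to the PaLS parameters.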

IP 2. An envelope for the spectrum of a matrix
An envelope-type region E(A) in the complex plane that contains the eigenvalues of a given n × n complex matrix A is introduced. E(A) is the intersection of an infinite number of regions defined by cubic curves. The notion and method of construction of E(A) extend those of the numerical range of A, which is known to be an intersection of an infinite number of half-planes; as a consequence, E(A) is contained in the numerical range and represents an improvement in localizing the spectrum of A.
Michael Tsatsomeros
Dept. of Mathematics
Washington State University
[email protected]

Panayiotis Psarrakos
Dept. of Mathematics
National Technical University of Athens
[email protected]
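The cubic-curve construction of E(A) is not detailed in the abstract, but the baseline object it refines, the numerical range as an intersection of half-planes, is easy to sketch: for each angle θ, the largest eigenpair of the rotated Hermitian part H(θ) = (e^{iθ}A + e^{-iθ}A*)/2 gives a supporting half-plane and a boundary point v*Av of W(A).

```python
import numpy as np

def numerical_range_boundary(A, n_angles=180):
    """Boundary points of the numerical range W(A) via rotated Hermitian parts."""
    pts = []
    for theta in np.linspace(0.0, 2.0 * np.pi, n_angles, endpoint=False):
        H = (np.exp(1j * theta) * A + np.exp(-1j * theta) * A.conj().T) / 2
        w, V = np.linalg.eigh(H)
        v = V[:, -1]                     # eigenvector of the largest eigenvalue
        pts.append(v.conj() @ A @ v)     # corresponding boundary point of W(A)
    return np.array(pts)

A = np.array([[0.0, 1.0], [0.0, 0.0]])  # W(A) is the disk of radius 1/2 about 0
pts = numerical_range_boundary(A)
```

For this Jordan block every boundary point has modulus 1/2, matching the known disk; E(A), being cut out by cubic rather than linear curves, sits inside this region.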

IP 3. An iterative linear algebra perspective on compressed sensing and matrix completion
Compressed sensing and matrix completion are techniques by which simplicity in data can be exploited to allow for sensing at the information rate. Compressed sensing is typically cast as seeking to measure an unknown vector that has few non-zeros, but in unknown locations, and to make the measurements through the action of a matrix. Compressed sensing establishes that when the vector to be measured is length n, but has k ≪ n nonzeros, the measurement matrix need only have order k rows, which is the minimum order possible. Matrix completion is a similar question, but where the matrix measured is assumed to be of low rank, and one would like to recover the matrix from only a few of its entries. Both compressed sensing and matrix completion have natural non-convex optimization formulations for the recovery of the vector and matrix respectively; typically the non-convex objectives in these formulations are replaced with their convex envelopes, and well-studied convex optimization algorithms are applied. The convex formulations are also amenable to detailed analysis, yielding quantitatively accurate estimates of the number of measurements needed for recovery using convex optimization. In this talk we consider an alternative perspective on compressed sensing and matrix completion, inspired by iterative linear algebra algorithms. We show that the average-case behaviour of surprisingly simple algorithms that directly try to solve the non-convex problem can be as good as or better than that of the convex approximations.
Jared Tanner
The School of [email protected]

IP 4. Extending preconditioned GMRES to nonlinear optimization
Preconditioned GMRES is a powerful iterative method for linear systems. In this talk, I will show how the concept of preconditioned GMRES can be extended to the general class of nonlinear optimization problems. I will present a nonlinear GMRES (N-GMRES) optimization method, which combines preliminary iterates generated by a stand-alone simple optimization method (the nonlinear preconditioner) to produce accelerated iterates in a generalized Krylov space. The nonlinear acceleration process, which is closely related to existing acceleration methods for nonlinear systems, is combined with a line search in every step to obtain a general nonlinear optimization method for which global convergence can be proved when steepest-descent preconditioning is used. Numerical tests show that N-GMRES is competitive with established nonlinear optimization methods, and outperforms them for a difficult canonical tensor approximation problem when an advanced nonlinear preconditioner is used. This suggests that the real power of N-GMRES may lie in its ability to use powerful problem-dependent nonlinear preconditioners.
Hans De Sterck
Department of Applied Mathematics
University of Waterloo
[email protected]
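A schematic sketch of the idea, not De Sterck's implementation: steepest descent acts as the nonlinear preconditioner, and each accelerated iterate is the affine combination of a window of stored iterates that minimizes the norm of the (here exactly affine) gradient. The line search and general-nonlinearity safeguards of the real method are omitted, so this only demonstrates the acceleration mechanism on a quadratic.

```python
import numpy as np

def n_gmres_sketch(grad, x0, step=0.1, window=6, iters=40):
    """Windowed acceleration of steepest descent: the accelerated iterate is
    sum(c_i * x_i) with sum(c_i) = 1 minimizing ||sum(c_i * grad(x_i))||."""
    X, G = [x0], [grad(x0)]
    for _ in range(iters):
        xp = X[-1] - step * G[-1]               # preconditioner (SD) step
        X.append(xp); G.append(grad(xp))
        X, G = X[-window:], G[-window:]
        Gm, Xm = np.column_stack(G), np.column_stack(X)
        m = Gm.shape[1]
        # KKT system for: min ||Gm c||^2  subject to  sum(c) = 1
        K = np.block([[Gm.T @ Gm, np.ones((m, 1))],
                      [np.ones((1, m)), np.zeros((1, 1))]])
        rhs = np.zeros(m + 1); rhs[-1] = 1.0
        c = np.linalg.lstsq(K, rhs, rcond=None)[0][:m]
        xa = Xm @ c                             # accelerated iterate
        X.append(xa); G.append(grad(xa))
    return X[-1]

Q = np.diag([1.0, 2.0, 3.0, 4.0, 5.0])
b = np.ones(5)
grad = lambda x: Q @ x - b                      # gradient of 0.5 x'Qx - b'x
x = n_gmres_sketch(grad, np.zeros(5))
```

On a quadratic the gradient is affine, so once the window spans enough directions the least-squares combination reproduces GMRES-like convergence; for genuinely nonlinear objectives the combination is only a local model and the line search becomes essential.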

IP 5. Matrix iterations to summarize evolving networks
I will describe a new algorithm for summarizing


properties of evolving networks. This type of data, recording connections that come and go over time, is being generated in many modern applications, including telecommunications and on-line human social behavior. The algorithm computes a dynamic measure of how well pairs of nodes can communicate by taking account of routes through the network that respect the arrow of time. The conventional approach of downweighting for length (messages become corrupted as they are passed along) is combined with the novel feature of downweighting for age (messages go out of date). This allows us to generalize widely used centrality measures that have proved popular in static network science to the case of dynamic networks sampled at non-uniform points in time. In particular, indices can be computed that summarize the current ability of each node to broadcast and receive information. The computations involve basic operations in linear algebra, and the asymmetry caused by time's arrow is captured naturally through the non-commutativity of matrix-matrix multiplication. I will give illustrative examples on both synthetic and real-world communication data sets. This talk covers joint work with Ernesto Estrada, Peter Grindrod and Mark Parsons.
Des Higham
Dept. of Mathematics and Statistics
University of Strathclyde, Scotland, UK
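The recurrence below is a hypothetical minimal realization of the "downweight for length and age" idea (parameter `a` penalizes walk length via a Katz-style resolvent, `decay` penalizes age), not necessarily the talk's exact formula. Note that the matrix product is taken in time order, which is exactly where non-commutativity encodes the arrow of time.

```python
import numpy as np

def running_communicability(adj_seq, a=0.2, decay=0.5):
    """Aging dynamic communicability over time-ordered adjacency matrices:
    S <- (I + decay * S) (I - a * A_t)^{-1} - I  at each time step.
    Row sums rank broadcasters; column sums rank receivers."""
    n = adj_seq[0].shape[0]
    S = np.zeros((n, n))
    for A in adj_seq:
        S = (np.eye(n) + decay * S) @ np.linalg.inv(np.eye(n) - a * A) - np.eye(n)
    return S.sum(axis=1), S.sum(axis=0)   # (broadcast, receive) scores

# Edge 0->1 exists at time 1, edge 1->2 at time 2: a message can travel
# 0 -> 1 -> 2 respecting time, but not in the reversed order.
A1 = np.zeros((3, 3)); A1[0, 1] = 1.0
A2 = np.zeros((3, 3)); A2[1, 2] = 1.0
bcast, recv = running_communicability([A1, A2])
```

Here node 0's broadcast score includes the time-respecting two-hop route to node 2 (weight a·a·decay = 0.02 on top of the direct 0→1 link), whereas feeding the matrices in reverse order would eliminate that route entirely.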

IP 6. PDE-constrained optimization
The term 'PDE-constrained optimization' refers to optimization problems which involve partial differential equations as side constraints. Problems of this class arise, for instance, in parameter identification as well as optimal control. Dealing with them appropriately and efficiently touches upon a wide range of topics, such as optimization theory in function spaces, tailored discretizations, error estimation, and numerical linear algebra. In this presentation we focus on some of the properties of linear systems arising as optimality conditions for discretized optimal control problems. Particular attention will be given to problems with additional inequality constraints, which exhibit some unexpected features.
Roland Herzog
Chemnitz University of Technology
[email protected]
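The linear systems mentioned above have a characteristic saddle-point structure. A hypothetical toy instance (an equality-constrained quadratic program standing in for a discretized optimal control problem) makes this concrete: the first-order optimality conditions couple the primal variable and a Lagrange multiplier in one indefinite system.

```python
import numpy as np

# Toy optimality (KKT) system: minimize 0.5 x'Qx - c'x  subject to  Ax = b.
# The first-order conditions are the saddle-point system
#   [[Q, A'], [A, 0]] [x; p] = [c; b],
# structurally the same as the systems arising from discretized optimal control.
Q = np.diag([2.0, 3.0, 4.0])                    # objective Hessian (SPD)
A = np.array([[1.0, 1.0, 1.0]])                 # one equality constraint
c = np.array([1.0, 0.0, 0.0])
b = np.array([1.0])

K = np.block([[Q, A.T], [A, np.zeros((1, 1))]])
sol = np.linalg.solve(K, np.concatenate([c, b]))
x, p = sol[:3], sol[3:]                         # primal solution, multiplier
```

The matrix K is symmetric but indefinite, which is why this problem class motivates its own preconditioners and solvers rather than plain Cholesky-based approaches.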

IP 7. Hierarchical tensor decomposition and approximation
The (numerical) linear algebra related to tensor computations plays an immensely powerful role in theoretical and applied sciences: applied mathematics, biology, chemistry, information sciences, physics and many other areas. In this talk we consider high-order or high-dimensional tensors of the form A ∈ R^{n×···×n} as they appear in multivariate (discrete or continuous) approximation problems. These require special techniques in order to allow a representation and approximation for high dimension d (cf. the curse of dimension). A typical example is the canonical polyadic (CP) format

A_{i_1,...,i_d} = ∑_{j=1}^{r} a^{(1)}_j(i_1) · · · a^{(d)}_j(i_d)

that requires O(d · n · r) degrees of freedom for the representation. However, this simple format comes along with several numerical obstructions, because it is highly non-linear. In the last three years some new hierarchical formats (namely Tensor Train and Hierarchical Tucker) have been developed and analysed. These include as a subset all tensors of CP format (rank parameter r fixed), but in addition they allow for some nice linear algebra approaches, e.g. a hierarchical SVD in complexity O(d · n · r³). The hierarchical SVD is similar to the SVD for matrices in the sense that it allows us to reliably compute a quasi-best approximation in the hierarchical format. We will outline the basic linear algebra behind the new approaches as well as open questions and possible applications.
Lars Grasedyck
Institute for Geometry and Practical Mathematics
RWTH Aachen
[email protected]
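The CP format above can be made concrete with a small sketch: store d factor matrices of size n × r (O(d·n·r) numbers) and evaluate any entry as a sum of r products, verified here against a dense reconstruction that is only feasible for tiny n and d.

```python
import numpy as np

def cp_entry(factors, idx):
    """Entry A[i1,...,id] of a rank-r CP tensor given d factor matrices (n x r):
    A[i1,...,id] = sum_j factors[0][i1,j] * ... * factors[d-1][id,j]."""
    r = factors[0].shape[1]
    prod = np.ones(r)
    for mode, i in zip(factors, idx):
        prod = prod * mode[i, :]      # elementwise product over the rank index
    return prod.sum()

rng = np.random.default_rng(1)
d, n, r = 4, 5, 3
factors = [rng.standard_normal((n, r)) for _ in range(d)]   # O(d*n*r) storage

# Dense cross-check (n^d entries -- exactly the cost the format avoids).
dense = np.zeros((n,) * d)
for idx in np.ndindex(*[n] * d):
    dense[idx] = cp_entry(factors, idx)
full = np.einsum('ir,jr,kr,lr->ijkl', *factors)
```

The storage gap is the whole point: here 4·5·3 = 60 numbers represent a tensor with 5⁴ = 625 entries, and the gap grows exponentially in d.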

IP 8. Improving performance and robustness of incomplete factorization preconditioners
We will present methods for improving the performance and robustness of incomplete factorization preconditioners for solving both SPD and general sparse linear systems. A key technique that results in significant improvement is the use of block rows and columns, which is commonplace in direct solvers. This enables the construction of denser and more robust preconditioners without the excessive cost traditionally associated with the construction of such preconditioners. We present empirical results to show that in most cases, the preconditioner density can be increased in a memory-neutral manner for solving unsymmetric systems with restarted GMRES, because the restart parameter can be reduced without compromising robustness. In addition to blocking, we will also present some relatively simple but effective adaptive heuristics for improving preconditioning based on incomplete factorization. These include robust management of breakdown of incomplete Cholesky factorization, use of a flexible fill factor, and automatic tuning of the drop tolerance and fill factor in applications requiring the solution of a sequence of linear systems.
Anshul Gupta


Business Analytics and Mathematical Sciences
IBM T.J. Watson Research Center, Yorktown Heights, NY 10598
[email protected]
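The density/drop-tolerance trade-off discussed above can be experimented with directly in SciPy, whose `spilu` exposes `fill_factor` and `drop_tol` knobs analogous to (though much simpler than) the adaptive heuristics in the talk. A standard 2-D Poisson matrix serves as the test system.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# 2-D Poisson matrix on an n x n grid via Kronecker sums (sparse, SPD).
n = 20
T = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(n, n))
A = sp.csc_matrix(sp.kron(sp.eye(n), T) + sp.kron(T, sp.eye(n)))
b = np.ones(A.shape[0])

# Incomplete LU: larger fill_factor / smaller drop_tol => denser, more
# robust preconditioner, at higher construction and memory cost.
ilu = spla.spilu(A, drop_tol=1e-4, fill_factor=10)
M = spla.LinearOperator(A.shape, ilu.solve)     # preconditioner action

x, info = spla.gmres(A, b, M=M, restart=20)     # restarted, preconditioned GMRES
```

With a denser ILU the Krylov iteration converges in very few steps even with a short restart cycle, which is exactly the memory trade the abstract describes: spend memory on the preconditioner, save it on the restart window.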

IP 9. Solving nonlinear eigenvalue problems in electronic structure calculation
The Kohn-Sham density functional theory is the most widely used theory for studying electronic properties of molecules and solids. It reduces the need to solve a many-body Schrodinger equation to the task of solving a system of single-particle equations coupled through the electron density. These equations can be viewed as a nonlinear eigenvalue problem. Although they contain far fewer degrees of freedom, these equations are more difficult in terms of their mathematical structure. In this talk, I will give an overview of efficient algorithms for solving this type of problem. A key concept that is important for understanding these algorithms is a nonlinear map known as the Kohn-Sham map. The ground-state electron density is a fixed point of this map. I will describe properties of this map and its Jacobian. These properties can be used to develop effective strategies for accelerating Broyden's method for finding the optimal solution. They can also be used to reduce the computational complexity associated with the evaluation of the Kohn-Sham map, which is the most expensive step in a Broyden iteration.
Chao Yang
Computational Research Division
Lawrence Berkeley National Laboratory, Berkeley, CA
[email protected]
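A hypothetical toy analogue of the fixed-point structure described above (not an actual Kohn-Sham solver): the "density" is the squared lowest eigenvector of a Hamiltonian that itself depends on the density, and the self-consistent field loop uses plain linear mixing where the talk's algorithms would use an accelerated Broyden update.

```python
import numpy as np

def scf_simple_mixing(F, rho0, alpha=0.5, tol=1e-10, maxit=200):
    """Self-consistent field iteration for the fixed point rho = F(rho),
    using linear mixing rho <- (1 - alpha) rho + alpha F(rho)."""
    rho = rho0
    for k in range(maxit):
        f = F(rho)
        if np.linalg.norm(f - rho) < tol:
            return rho, k
        rho = (1 - alpha) * rho + alpha * f
    return rho, maxit

# Toy 'Kohn-Sham map': density of the lowest eigenvector of H(rho).
H0 = np.diag([0.0, 1.0, 2.0]) + 0.1 * np.ones((3, 3))
def F(rho):
    w, V = np.linalg.eigh(H0 + np.diag(0.3 * rho))  # density-dependent Hamiltonian
    v = V[:, 0]                                     # lowest eigenvector
    return v * v                                    # new density (sums to 1)

rho, iters = scf_simple_mixing(F, np.full(3, 1.0 / 3.0))
```

The weak coupling (factor 0.3) makes the map a contraction, so even naive mixing converges; Broyden-type acceleration matters precisely when the Jacobian of the map makes simple mixing slow or divergent.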

IP 10. Randomized algorithms in linear algebra
The introduction of randomization in the design and analysis of algorithms for matrix computations (such as matrix multiplication, least-squares regression, the Singular Value Decomposition (SVD), etc.) over the last decade provided a new paradigm and a complementary perspective to traditional numerical linear algebra approaches. These novel approaches were motivated by technological developments in many areas of scientific research that permit the automatic generation of large data sets, which are often modeled as matrices. In this talk we will outline how such approaches can be used to approximate problems ranging from matrix multiplication and the Singular Value Decomposition (SVD) of matrices to approximately solving least-squares problems and systems of linear equations. Application of the proposed algorithms to data analysis will also be discussed.
Petros Drineas
Department of Computer Science
Rensselaer Polytechnic Institute
[email protected]
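A representative instance of this paradigm (one standard variant, not necessarily the talk's algorithms) is the randomized SVD: sketch the range of A with a Gaussian test matrix, orthonormalize, then take an exact SVD of the small projected matrix.

```python
import numpy as np

def randomized_svd(A, k, oversample=10, rng=None):
    """Basic randomized SVD sketch: Gaussian range-finding plus a small exact SVD."""
    rng = rng if rng is not None else np.random.default_rng(0)
    m, n = A.shape
    Omega = rng.standard_normal((n, k + oversample))   # random test matrix
    Q, _ = np.linalg.qr(A @ Omega)                     # orthonormal basis for sampled range
    U_small, s, Vt = np.linalg.svd(Q.T @ A, full_matrices=False)
    return (Q @ U_small)[:, :k], s[:k], Vt[:k]

rng = np.random.default_rng(0)
A = rng.standard_normal((200, 8)) @ rng.standard_normal((8, 100))   # exactly rank 8
U, s, Vt = randomized_svd(A, k=8)
```

For an exactly rank-8 matrix the oversampled sketch captures the range, so the factorization reproduces A to machine precision; for matrices with decaying spectra the same recipe yields a near-optimal rank-k approximation with high probability.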

IP 11. Computations with some classes of matrices related to P-matrices
A square matrix is called a P-matrix if all its principal minors are positive. Subclasses of P-matrices very important in applications are the nonsingular totally nonnegative matrices and the nonsingular M-matrices. Other classes of P-matrices used for eigenvalue localization are also presented. We also present some recent results and applications of the following two classes of matrices: sign-regular matrices (which contain the class of totally nonnegative matrices) and H-matrices (which contain the class of M-matrices). Let us recall that nonsingular H-matrices are, in fact, strictly diagonally dominant matrices up to a column scaling. For diagonally dominant matrices and some subclasses of nonsingular totally nonnegative matrices, accurate methods for computing their singular values, eigenvalues or inverses have been obtained, assuming that adequate natural parameters are provided. We present some recent extensions of these methods to other related classes of matrices.
Juan Manuel Pena
Dept. of Applied Mathematics, Universidad de Zaragoza, Spain
[email protected]
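
The defining property can be checked directly for small matrices; a brute-force sketch (exponential cost, for illustration only — the point of the classes above is precisely that they admit much cheaper tests):

```python
import numpy as np
from itertools import combinations

def is_p_matrix(A):
    """True if every principal minor of A is positive (the P-matrix property).
    Enumerates all 2^n - 1 principal submatrices, so only usable for small n."""
    n = A.shape[0]
    for k in range(1, n + 1):
        for idx in combinations(range(n), k):
            if np.linalg.det(A[np.ix_(idx, idx)]) <= 0:
                return False
    return True
```

For example, the tridiagonal M-matrix tridiag(-1, 2, -1) is a P-matrix, while [[1, 2], [3, 1]] is not (its determinant is negative).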

IP 12. Reduction of quadratic matrix polynomials to triangular form
There is no analogue of the generalized Schur decomposition for quadratic matrix polynomials, in the sense that Q(λ) = λ^2 A_2 + λ A_1 + A_0 ∈ C[λ]^{n×n} cannot, in general, be reduced to triangular form T(λ) = λ^2 I + λ T_1 + T_0 by equivalence transformations independent of λ. We show that there exist matrix polynomials E(λ), F(λ) with constant determinants such that E(λ)Q(λ)F(λ) = T(λ) is a triangular quadratic matrix polynomial having the same eigenstructure as Q(λ). For any linearization λ I_{2n} − A of Q(λ) with A_2 nonsingular, we show that there is a matrix U ∈ C^{2n×n} with orthonormal columns such that S = [U  AU] is nonsingular and

  S^{−1}(λ I_{2n} − A)S = λ I_{2n} − [ 0  −T_0 ; I  −T_1 ]

is a companion linearization of an n×n triangular quadratic T(λ) = λ^2 I + λ T_1 + T_0 equivalent to Q(λ). Finally, we describe a numerical procedure to compute U and T(λ).
Francoise Tisseur
School of Mathematics, The University of Manchester, UK
[email protected]
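
For the monic case A_2 = I (the general case with nonsingular A_2 reduces to it by multiplying through by A_2^{-1}), the companion block form [0, −T_0; I, −T_1] used in the abstract is easy to build and test numerically; a minimal sketch:

```python
import numpy as np

def companion(T1, T0):
    """Companion linearization of the monic quadratic T(lam) = lam^2 I + lam T1 + T0,
    as the block matrix [[0, -T0], [I, -T1]]: its eigenvalues are exactly the
    roots of det(T(lam)) = 0."""
    n = T0.shape[0]
    Z, I = np.zeros((n, n)), np.eye(n)
    return np.block([[Z, -T0], [I, -T1]])
```

With diagonal coefficients the eigenvalues decouple into scalar quadratics: for T1 = 0 and T0 = diag(-1, -4) the quadratics are λ² − 1 and λ² − 4, with roots ±1 and ±2.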

MS 1. Recent advances in matrix functions
Talk 1. Computational issues related to the geometric mean of structured matrices
In several applications it is required to compute the geometric mean of a set of positive definite matrices. In certain cases, like in the design of radar systems, the matrices are structured, say,


they are Toeplitz, and one expects that the mean still keeps the same structure. Unfortunately, the available definitions of the geometric mean, like the Karcher mean, do not generally maintain the structure of the input matrices. In this talk we introduce a definition of mean which preserves the structure, analyze its properties and present algorithms for its computation.
Dario A. Bini
Dipartimento di Matematica, Università di Pisa, Italy
[email protected]

Bruno Iannazzo
Dipartimento di Matematica e Informatica, Università di Perugia, Italy
[email protected]

Talk 2. Efficient, communication-minimizing algorithms for the symmetric eigenvalue decomposition and the singular value decomposition
We propose algorithms for computing the symmetric eigenvalue decomposition and the singular value decomposition (SVD) that minimize communication in the asymptotic sense while simultaneously having arithmetic operation costs within a factor 3 of that for the most efficient existing algorithms. The essential cost for the algorithms is in performing QR factorizations, of which we require no more than six for the symmetric eigenproblem and twelve for the SVD. We establish backward stability under mild assumptions, and numerical experiments indicate that our algorithms tend to yield decompositions with considerably smaller backward error and eigen/singular vectors closer to orthogonal than existing algorithms.
Yuji Nakatsukasa
School of Mathematics, The University of Manchester
[email protected]

Nicholas J. Higham
School of Mathematics, The University of Manchester
[email protected]

Talk 3. The Padé approximation and the matrix sign function
In the talk we focus on the properties of the Padé approximants of a certain hypergeometric function on which the Padé families of iterations for computing the matrix sign and sector functions are based. We have determined the location of the poles of the Padé approximants, and we have proved that all coefficients of the power series expansions of the reciprocals of the denominators of the Padé approximants, and of the power series expansions of the Padé approximants themselves, are positive. These properties are crucial in the proof of the conjecture – stated by Laszkiewicz and Zietak, and extending the result of Kenney and Laub – on a region of convergence of these Padé families of iterations. The talk is based on the paper by O. Gomilko, F. Greco, K. Zietak and the paper by O. Gomilko, D.B. Karp, M. Lin, K. Zietak. In the talk we also consider a certain different kind of rational approximation to the sign function.
Krystyna Zietak
Institute of Mathematics and Computer Science, Wrocław University of Technology
[email protected]

Talk 4. A recursive blocked Schur algorithm for computing the matrix square root
The Schur algorithm for computing a matrix square root reduces the matrix to the Schur triangular form and then computes a square root of the triangular matrix. In this talk I will describe a recursive blocking technique in which the computation of the square root of the triangular matrix can be made rich in matrix multiplication. Numerical experiments making appropriate use of level 3 BLAS show significant speedups over the point algorithm, both in the square root phase and in the algorithm as a whole. The excellent numerical stability of the point algorithm is shown to be preserved by recursive blocking. Recursive blocking is also shown to be effective for multiplying triangular matrices.
Edvin Deadman
University of Manchester
[email protected]

Nicholas J. Higham
School of Mathematics, University of Manchester
[email protected]

Rui Ralha
Center of Mathematics, University of Minho, Portugal
[email protected]
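
For reference, a minimal sketch of the point Schur method that the talk accelerates: compute a complex Schur form, run the Björck-Hammarling recurrence on the triangular factor, and transform back. The recursive blocking that is the talk's contribution is not reproduced here.

```python
import numpy as np
from scipy.linalg import schur

def sqrtm_schur(A):
    """Principal matrix square root via the (point) Schur method:
    A = Q T Q*, R = sqrt(T) column by column, result Q R Q*."""
    T, Q = schur(np.asarray(A, dtype=float), output='complex')
    n = T.shape[0]
    R = np.zeros_like(T)
    for j in range(n):
        R[j, j] = np.sqrt(T[j, j])
        for i in range(j - 1, -1, -1):
            # T[i,j] = R[i,i] R[i,j] + R[i,j] R[j,j] + sum_{i<k<j} R[i,k] R[k,j]
            s = T[i, j] - R[i, i + 1:j] @ R[i + 1:j, j]
            R[i, j] = s / (R[i, i] + R[j, j])
    return Q @ R @ Q.conj().T
```

For a real matrix with positive eigenvalues the result is real up to roundoff.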

MS 2. Methods for Toeplitz matrices and their application
Talk 1. Toeplitz operators with matrix-valued symbols and some (unexpected) applications
We discuss the eigenvalue distribution in the Weyl sense of general matrix-sequences associated to a symbol. As a specific case we consider Toeplitz sequences generated by matrix-valued (non-Hermitian) bounded functions. We show that the canonical distribution can be proved under mild assumptions on the spectral range of the given symbol. Finally some applications are introduced and discussed.
Stefano Serra Capizzano
Dipartimento di Fisica e Matematica, Università dell'Insubria - sede di Como, Italy
[email protected]

Talk 2. Fast approximation to the Toeplitz matrix exponential
The shift-invert Lanczos (or Arnoldi) method is employed to generate an orthonormal basis from the Krylov subspace corresponding to the real Toeplitz matrix and an initial vector. The vectors and recurrence coefficients produced by this method are exploited to approximate the Toeplitz matrix exponential. The Toeplitz matrix inversion formula and rapid Toeplitz matrix-vector multiplications are utilized to lower the computational costs. For the convergence analysis, a sufficient condition is established to guarantee that the error bound is independent of the norm of the matrix. Numerical results and applications to computational finance are given to demonstrate the efficiency of the method.
Hai-Wei Sun
Faculty of Science and Technology, University of Macau, China
[email protected]
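
The rapid Toeplitz matrix-vector products mentioned above are typically realized by circulant embedding and the FFT; a minimal sketch of that standard O(n log n) trick:

```python
import numpy as np

def toeplitz_matvec(c, r, x):
    """Multiply the n x n Toeplitz matrix with first column c and first row r
    (r[0] must equal c[0]) by x, by embedding it in a 2n x 2n circulant whose
    action is a circular convolution computed with the FFT."""
    n = len(x)
    # First column of the embedding circulant: [c, arbitrary, r reversed].
    col = np.concatenate([c, [0.0], r[:0:-1]])
    y = np.fft.ifft(np.fft.fft(col) * np.fft.fft(np.concatenate([x, np.zeros(n)])))
    return y[:n].real
```

The first n entries of the circulant product reproduce the Toeplitz product exactly.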

Talk 3. Matrix algebra sequences can be spectrally equivalent with ill-conditioned Toeplitz ones
The construction of fast and efficient iterative procedures or effective multigrid schemes requires the study of the spectrum of the matrix sequences {A_n}_n, where A_n = P_n^{−1}(f) T_n(f). In this talk, we focus on the case where T_n(f) is a Toeplitz matrix generated by a nonnegative


real function f, and P_n(f) denotes matrices belonging to the tau or circulant algebras. Assuming that the symbol f has discrete roots of noninteger order, we will show that under suitable assumptions the spectrum of the matrix sequence {A_n}_n is bounded by constants far away from zero and infinity. Using the developed theory, we propose effective preconditioners and we give hints for optimal multigrid schemes.
Paris Vassalos
Dept. of Informatics, Athens University of Economics and Business
[email protected]

Dimitrios Noutsos
Dept. of Mathematics, University of Ioannina, Greece
[email protected]

Talk 4. Aggregation-based multigrid methods for Toeplitz matrices
Aggregation and smoothed aggregation based multigrid methods can be analyzed in the context of the convergence theory for Toeplitz and circulant matrices. As in aggregation-based multigrid methods, aggregates are formed for generating symbols with a single isolated zero at the origin. The interpolation can then be improved by applying an additional operator. This improvement can be interpreted as the smoothing step in the original smoothed aggregation method. Depending on the original generating symbol, several smoothing steps can be necessary. Numerical examples show the efficiency of the aggregation-based approach in the case of Toeplitz and circulant matrices.
Matthias Bolten
Department of Mathematics and Science, University of Wuppertal, Germany
[email protected]

Marco Donatelli
Dipartimento di Fisica e Matematica, Università dell'Insubria - sede di Como, Italy
[email protected]

Thomas Huckle
Dept. of Informatics, Technische Universität München, Germany
[email protected]

MS 3. Matrix factorizations and applications
Talk 1. Classes of matrices with bidiagonal factorization
Matrices with a bidiagonal decomposition satisfying some sign restrictions are analyzed. They include all nonsingular totally positive matrices, their matrices opposite in sign and their inverses, as well as tridiagonal nonsingular H-matrices. Properties of these matrices are presented, and the bidiagonal factorization can be used to perform computations with high relative accuracy.
Alvaro Barreras
Dept. of Applied Mathematics / IUMA, Universidad de Zaragoza
[email protected]

Juan Manuel Pena
Dept. of Applied Mathematics / IUMA, Universidad de Zaragoza
[email protected]

Talk 2. Cholesky factorization for singular matrices
Direct and iterative solution methods for linear least-squares problems have been studied in numerical linear algebra. Part of the difficulty in solving least-squares problems is the fact that many methods solve the system by implicitly solving the normal equations A^T A x = A^T b, with A ∈ R^{n×m} and b ∈ R^n. In this talk we consider this problem when A is a rank-deficient matrix. Then A^T A is a positive semidefinite matrix and its Cholesky factorization is not unique. So, we introduce a full-rank Cholesky decomposition LL^T of the normal equations matrix without the need to form the normal matrix itself. We present two algorithms to compute the entries of L by rows (or by columns), with the corresponding error analysis. Numerical experiments illustrating the proposed algorithms are given.
Rafael Canto
Dept. of Applied Mathematics, Universitat Politècnica de València
[email protected]

Ana M. Urbano
Dept. of Applied Mathematics, Universitat Politècnica de València
[email protected]

María J. Peláez
Dept. of Mathematics, Universidad Católica del Norte, Chile
[email protected]
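
A simplified sketch of a full-rank Cholesky factorization of a positive semidefinite matrix: outer-product elimination that simply skips (numerically) zero pivots, yielding L with rank(M) columns and L L^T = M. Unlike the talk's algorithms, this sketch works on the explicitly formed normal matrix.

```python
import numpy as np

def full_rank_cholesky(M, tol=1e-10):
    """Full-rank Cholesky of a symmetric positive semidefinite M:
    returns L with L @ L.T == M and one column per nonzero pivot."""
    R = np.array(M, dtype=float)
    n = R.shape[0]
    thresh = tol * max(1.0, np.trace(R))  # pivot tolerance, scaled by trace
    cols = []
    for k in range(n):
        if R[k, k] > thresh:
            col = R[:, k] / np.sqrt(R[k, k])
            R -= np.outer(col, col)       # eliminate the k-th pivot
            cols.append(col)
        # a zero pivot of a PSD matrix forces a zero row/column: skip it
    return np.column_stack(cols)
```

For a rank-2 matrix A, the factor of M = A^T A has exactly two columns.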

Talk 3. Applications of the singular value decomposition to perturbation theory of eigenvalues of matrix polynomials
In this talk, motivated by a problem posed by Wilkinson, we study the coefficient perturbations of an n×n matrix polynomial to n×n matrix polynomials which have a prescribed eigenvalue of specified algebraic multiplicity and index of annihilation. For an n×n matrix polynomial P(λ) and a given scalar µ ∈ C, we introduce two weighted spectral norm distances, E_r(µ) and E_{r,k}(µ), from P(λ) to the n×n matrix polynomials that have µ as an eigenvalue of algebraic multiplicity at least r, and to those that have µ as an eigenvalue of algebraic multiplicity at least r and maximum Jordan chain length exactly k, respectively. Then we obtain a singular value characterization of E_{r,1}(µ), and derive a lower bound for E_{r,k}(µ) and an upper bound for E_r(µ), constructing associated perturbations of P(λ).
Panayiotis Psarrakos
Dept. of Mathematics, National Technical University of Athens
[email protected]

Nikolaos Papathanasiou
Dept. of Mathematics, National Technical University of Athens
[email protected]

Talk 4. On reduced rank nonnegative matrix factorization for symmetric nonnegative matrices
For a nonnegative matrix V ∈ R^{m,n}, the nonnegative matrix factorization (NNMF) problem consists of finding nonnegative matrix factors W ∈ R^{m,r} and H ∈ R^{r,n} such that V ≈ WH. We consider the algorithm provided by Lee and Seung, which finds nonnegative W and H such that ‖V − WH‖_F is minimized. For the case m = n in which V is symmetric, we present results concerning when the best approximate factorization results in the product WH being symmetric, and on cases in which the best approximation cannot be a symmetric matrix. Results regarding other special cases as well as applications are also discussed.
Minerva Catral
Dept. of Mathematics, Xavier University
[email protected]


Lixing Han
Dept. of Mathematics, University of Michigan-Flint
[email protected]

Michael Neumann
Dept. of Mathematics, University of Connecticut

Robert J. Plemmons
Dept. of Mathematics and Computer Science, Wake Forest University
[email protected]
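
The Lee-Seung multiplicative updates referred to above are short enough to state in full; a minimal sketch (random initialization assumed, with a small epsilon guarding the divisions):

```python
import numpy as np

def nmf_lee_seung(V, r, iters=200, seed=0):
    """Multiplicative-update NMF (Lee & Seung): nonnegative W (m x r) and
    H (r x n) approximately minimizing ||V - W H||_F. The updates keep all
    entries nonnegative and never increase the objective."""
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.random((m, r)) + 0.1
    H = rng.random((r, n)) + 0.1
    eps = 1e-12  # guards against division by zero
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H
```

On data that is exactly of nonnegative rank r, the residual typically drops by orders of magnitude within a couple of hundred iterations.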

MS 4. Algorithms on manifolds of low-rank matrices and tensors
Talk 1. Low rank dynamics for computing extremal points of real and complex pseudospectra
We consider the real ε-pseudospectrum of a square matrix, which is the set of eigenvalues of all real matrices that are ε-close to the given matrix, where closeness is measured in either the 2-norm or the Frobenius norm. We characterize extremal points and compare the situation with that for the unstructured ε-pseudospectrum. We present differential equations for low rank (1 and 2) matrices for the computation of the extremal points of the pseudospectrum. Discretizations of the differential equations yield algorithms that are fast and well suited for large sparse matrices. Based on these low-rank differential equations, we further obtain an algorithm for drawing boundary sections of the structured pseudospectrum with respect to both considered norms.
Nicola Guglielmi
Dipartimento di Matematica, Università di L'Aquila, Italy
[email protected]

Christian Lubich
Mathematisches Institut, Universität Tübingen
[email protected]
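
For contrast with the low-rank ODE approach, the unstructured ε-pseudospectrum can be computed by brute force on a grid, since z belongs to it exactly when σ_min(A − zI) ≤ ε; a minimal baseline sketch that the talk's methods are designed to outperform:

```python
import numpy as np

def pseudospectrum_grid(A, x, y, eps):
    """Boolean mask over the grid z = x + iy marking points of the
    (unstructured) eps-pseudospectrum: sigma_min(A - z I) <= eps."""
    n = A.shape[0]
    X, Y = np.meshgrid(x, y)
    S = np.empty(X.shape)
    for i in range(X.shape[0]):
        for j in range(X.shape[1]):
            z = X[i, j] + 1j * Y[i, j]
            S[i, j] = np.linalg.svd(A - z * np.eye(n), compute_uv=False)[-1]
    return S <= eps
```

For the 1x1 zero matrix, σ_min(−z) = |z|, so the mask is a disk of radius ε.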

Talk 2. Parametric model order reduction using stabilized consistent interpolation on matrix manifolds
A robust method for interpolating reduced-order linear operators on their matrix manifolds is presented. Robustness is achieved by enforcing consistency between the different sets of generalized coordinates underlying different instances of parametric reduced-order models (ROMs), and by explicitly stabilizing the final outcome of the interpolation process. Consistency is achieved by transforming the ROMs before interpolation using a rotation operator obtained from the solution of a generalized orthogonal Procrustes problem. Stabilization is achieved using a novel real-time algorithm based on semidefinite programming. The overall method is illustrated with its on-line application to the parametric fluid-structure analysis of a wing-store configuration.
David Amsallem
Department of Aeronautics and Astronautics, Stanford University
[email protected]

Charbel Farhat
Department of Aeronautics and Astronautics, Department of Mechanical Engineering, Institute for Computational Methods in Engineering, Stanford University
[email protected]

Talk 3. Treatment of high-dimensional problems by low-rank manifolds of tensors
Many problems in e.g. the natural sciences, data mining and statistics are naturally posed as high-dimensional problems. Tensor product representations parametrize these problems on approximation manifolds, often fixed by a given maximal rank for the tensor approximation. A relatively new format is the HT/TT format as developed by Hackbusch (Leipzig) and Oseledets/Tyrtyshnikov (Moscow), which overcomes the shortcomings of older formats and gives a stable and often sparse way to represent high-dimensional quantities, so that the treatment of e.g. high-dimensional partial differential equations is on the verge of realization. In this talk, a general framework to realize such tasks in data-sparse tensor formats is presented. For the HT/TT format, we present some theoretical as well as algorithmic results for the treatment of high-dimensional optimization tasks and high-dimensional evolution equations.
Thorsten Rohwedder
Institut für Mathematik, Technische Universität Berlin
[email protected]

Talk 4. Local convergence of alternating optimization of multivariate functions in the presence of scaling indeterminacies
An easy and widely used approach to minimize functions with respect to certain tensor formats (such as CP, Tucker, TT or HT) is the alternating optimization algorithm, also called the nonlinear Gauss-Seidel method. A prominent example is the PARAFAC-ALS algorithm. Due to the usual non-uniqueness of tensor representations, standard convergence results for nonlinear Gauss-Seidel are usually not directly applicable. In this talk we present a quite general approach to prove local convergence, based on a geometric viewpoint, which regards tensors of fixed rank as orbits of a Lie group generating equivalent representations.
Andre Uschmajew
Institut für Mathematik, Technische Universität (TU) Berlin
[email protected]

MS 5. Advances in algebraic multigrid - New approaches and applications
Talk 1. Algebraic collocation coarse approximation multigrid
Most algebraic multigrid (AMG) methods define the coarse operators by applying the (Petrov-)Galerkin coarse approximation, where the sparsity pattern and operator complexity of the multigrid hierarchy are dictated by the multigrid prolongation and restriction. Therefore, AMG algorithms usually must settle on some compromise between the quality of these operators and the aggressiveness of the coarsening, which affects their rate of convergence and operator complexity. In this paper we propose an algebraic generalization of the collocation coarse approximation (CCA) approach of Wienands and Yavneh, where the choice of the sparsity pattern of the coarse operators is independent of the choice of the high-quality transfer operators. The new algorithm is based on the aggregation framework (smoothed and non-smoothed). Using a small set of low-energy eigenvectors, it computes the coarse grid operator by a weighted least squares process.


Numerical experiments for two-dimensional diffusion problems with sharply varying coefficients demonstrate the efficacy and potential of this multigrid algorithm.
Eran Treister
Department of Computer Science, Technion—Israel Institute of Technology, Israel
[email protected]

Ran Zemach
Department of Computer Science, Technion—Israel Institute of Technology, Israel
[email protected]

Irad Yavneh
Department of Computer Science, Technion—Israel Institute of Technology, Israel
[email protected]

Talk 2. Energy-minimization interpolation for adaptive algebraic multigrid
Adaptive algebraic multigrid (AMG) methods automatically detect the algebraically smooth error for a linear system and, as such, are robust black-box approaches for solving difficult problems. However, the two main families of methods, Bootstrap AMG (BAMG) and adaptive smoothed aggregation (αSA), suffer from drawbacks, such as noisy candidate vectors (BAMG) and potentially high operator complexity, especially for scalar problems (αSA). This work is intended to address some of these drawbacks by combining elements of BAMG and αSA through an energy-minimization interpolation framework. While the approach is general, the challenging, motivating problem is a biquadratic discretization of anisotropic diffusion.
Jacob B. Schroder
Center for Applied Scientific Computing, Lawrence Livermore National Laboratory (LLNL), USA
[email protected]

Robert D. Falgout
Center for Applied Scientific Computing, Lawrence Livermore National Laboratory (LLNL), USA
[email protected]

Talk 3. Algebraic multigrid (AMG) for complex network calculations
Clustering, ranking, or measuring distance for vertices on scale-free graphs are all computational tasks of interest. Such calculations may be carried out by solving eigensystems or linear systems involving matrices whose sparsity structure is related to an underlying scale-free graph. For many large systems of interest, classical iterative solvers (such as conjugate gradient and Lanczos) converge at prohibitively slow rates, due to ill-conditioned matrices and small spectral gap ratios. Efficiently preconditioning these systems with multilevel methods is difficult due to the scale-free topology. For some large model problems and real-world networks, we investigate the performance of a few AMG-related coarsening approaches.
Geoffrey D. Sanders
Center for Applied Scientific Computing, Lawrence Livermore National Laboratory (LLNL), USA
[email protected]

Van Emden Henson
Center for Applied Scientific Computing, Lawrence Livermore National Laboratory (LLNL), USA
[email protected]

Panayot S. Vassilevski
Center for Applied Scientific Computing, Lawrence Livermore National Laboratory (LLNL), USA
[email protected]

Talk 4. The polynomial of best uniform approximation to 1/x as smoother in two-grid methods
We discuss a simple convergence analysis of two-level methods where the relaxation on the fine grid uses the polynomial of best approximation in the uniform norm to 1/x on a finite interval with positive endpoints. The construction of the latter polynomial is of interest by itself, and we have included a derivation of a three-term recurrence relation for computing this polynomial. We have also derived several inequalities related to the error of best approximation, monotonicity of the polynomial sequence, and positivity, which we applied in the analysis of two-level methods.
Ludmil T. Zikatanov
Department of Mathematics, The Pennsylvania State University, USA
[email protected]

Johannes Kraus
Johann Radon Institute for Computational and Applied Mathematics (RICAM), Austria
[email protected]

Panayot S. Vassilevski
Center for Applied Scientific Computing, Lawrence Livermore National Laboratory (LLNL), USA
[email protected]

MS 6. Recent advances in fast iterative solvers - Part I of II
Talk 1. Challenges in analysis of Krylov subspace methods
The current state of the art of iterative solvers is the outcome of the tremendous algorithmic development over the last few decades and of investigations of how to solve given problems. In this contribution we focus on Krylov subspace methods, and more on the dual question of why things do or do not work. In particular, we will pose and discuss open questions such as: what the spectral information tells us about the behaviour of Krylov subspace methods; to which extent we can relax the accuracy of local operations in inexact Krylov subspace methods without causing an unwanted delay; how important the consideration of rounding errors is in various algorithmic techniques; whether it is useful to view Krylov subspace methods as matching-moment model reduction; and how the algebraic error can be included into locally efficient and fully computable a posteriori error bounds for adaptive PDE solvers.
Zdenek Strakos
Charles University in Prague and Academy of Sciences of the Czech Republic
[email protected]

Jorg Liesen
Institute of Mathematics, Technical University of Berlin
[email protected]

Talk 2. Updating preconditioners for parameterized systems
Parameterized linear systems arise in a range of applications, such as model reduction, acoustics, and inverse problems based on parametric level sets. Computing new preconditioners for many small changes in parameter(s) would be very expensive. However, using one or only a few preconditioners for all systems leads to an excessive total number of iterations. We show strategies to efficiently compute effective multiplicative updates to preconditioners. This has the advantage that, for a slight increase in overhead, the update is more or less independent of the original preconditioner used.


Eric de Sturler
Department of Mathematics, Virginia Tech
[email protected]

Sarah Wyatt
Department of Mathematics, Virginia Tech
[email protected]

Serkan Gugercin
Department of Mathematics, Virginia Tech
[email protected]

Talk 3. Efficient preconditioning techniques for two-phase flow simulations
In this talk we present efficient preconditioning strategies, suitable for the iterative solution of the algebraic systems arising in numerical simulations of two-phase flow problems modelled by the Cahn-Hilliard equation.
In this work, we decompose the original Cahn-Hilliard equation, which is nonlinear and of fourth order, into a system of two second-order equations for the so-called 'concentration' and 'chemical potential'. The resulting nonlinear system is discretized using the finite element method and solved by two variants of the inexact Newton method. The major focus of the work is to construct a preconditioner for the corresponding Jacobian, which is of block two-by-two form. The proposed preconditioning techniques are based on an approximate factorization of the discrete Jacobian and utilise to a full extent the properties of the underlying matrices.
We propose a preconditioning method that reduces the problem of solving the non-symmetric discrete Cahn-Hilliard system to the problem of solving systems with symmetric positive definite matrices, where off-the-shelf multilevel and multigrid algorithms are directly applicable. The resulting iterative method exhibits optimal convergence and computational complexity properties. The efficiency of the resulting preconditioning method is illustrated via various numerical experiments, including large-scale examples of both 2D and 3D problems.
The preconditioning techniques are more generally applicable to any algebraic system of the same structure, e.g., when solving complex symmetric linear systems.
M. Neytcheva
Institute for Information Technology, Uppsala University
[email protected]

Owe Axelsson
Institute of Geonics, Academy of Sciences of the Czech Republic
[email protected]

Petia Boyanova
Institute for Information Technology, Uppsala University
[email protected]

Martin Kronbichler
Institute for Information Technology, Uppsala University
[email protected]

Xunxun Wu
Institute for Information Technology, Uppsala University
[email protected]

Talk 4. Preconditioners in liquid crystal modelling
Liquid crystal displays are ubiquitous in modern life, being used extensively in monitors, televisions, gaming devices, watches, telephones, etc. Appropriate models feature characteristic length and time scales varying by many orders of magnitude, which provides difficult numerical challenges to those trying to simulate real-life dynamic industrial situations. The efficient solution of the resulting linear algebra sub-problems is of crucial importance for the overall effectiveness of the algorithms used. In this talk we will present some examples of saddle-point systems which arise in liquid crystal modelling and discuss their efficient solution using appropriate preconditioned iterative methods.
Alison Ramage
Dept of Mathematics and Statistics, University of Strathclyde, Glasgow
[email protected]

Chris Newton
Next Generation Displays Group, Hewlett-Packard Laboratories, Bristol
[email protected]

MS 7. Application of statistics to numerical linear algebra algorithms - Part I of II
Talk 1. Fast linear system solvers based on randomization techniques
We illustrate how dense linear algebra calculations can be enhanced by randomizing general or symmetric indefinite systems. This approach, based on a multiplicative preconditioning of the initial matrix, revisits the work from [Parker, 1995]. It enables us to avoid pivoting and thus to significantly reduce the amount of communication. This can be performed at a very affordable computational price while providing satisfying accuracy. We describe solvers based on randomization that take advantage of the latest generation of multicore or hybrid multicore/GPU machines, and we compare their Gflop/s performance with solvers from standard parallel libraries.
Marc Baboulin
Inria Saclay Île-de-France, University Paris-Sud, France
[email protected]

Dulceneia Becker
University of Tennessee, USA
[email protected]

Jack Dongarra
University of Tennessee, USA
[email protected]

Stanimire Tomov
University of Tennessee, USA
[email protected]

Talk 2. Numerical issues in randomized algorithms
Randomized algorithms are starting to find their way into a wide variety of applications that give rise to massive data sets. These algorithms downsize the enormous matrices by picking and choosing only particular columns or rows, thereby producing potentially huge savings in storage and computing speed. Although randomized algorithms can be fast and efficient, not much is known about their numerical properties. We will discuss the numerical sensitivity and stability of randomized algorithms, as well as the error due to randomization and the effect of the coherence of the matrix. Algorithms under consideration include matrix multiplication, least squares solvers, and low-rank approximations.


Ilse Ipsen
Department of Mathematics, North Carolina State University, USA
[email protected]

Talk 3. Near-optimal column based matrix reconstruction
We consider low-rank reconstruction of a matrix using a subset of its columns, and we present asymptotically optimal algorithms for both spectral norm and Frobenius norm reconstruction. The main tools we introduce to obtain our results are: (i) the use of fast approximate SVD-like decompositions for column-based matrix reconstruction, and (ii) two deterministic algorithms for selecting rows from matrices with orthonormal columns, building upon the sparse representation theorem for decompositions of the identity that appeared in [Batson, Spielman, Srivastava, 2011].
Christos Boutsidis
Mathematical Sciences Department, IBM T. J. Watson Research Center, USA
[email protected]

Petros Drineas
Computer Science Department, Rensselaer Polytechnic Institute, USA
[email protected]

Malik Magdon-Ismail
Computer Science Department, Rensselaer Polytechnic Institute, USA
[email protected]

Talk 4. Numerical experiments with statistical condition estimation
In this talk we present the results of some numerical experiments illustrating the use of statistical condition estimation (SCE). After a brief review of SCE, the use of the technique is demonstrated on a few topics of general interest, including the estimation of the condition of large sparse linear systems.
Alan J. Laub
Department of Mathematics, University of California Los Angeles, USA
[email protected]

MS 8. Rational Krylov methods: analysis and applications - Part I of II
Talk 1. Solving Sylvester equations through rational Galerkin projections
Recently (B. Beckermann, An error analysis for rational Galerkin projection applied to the Sylvester equation, SIAM J. Numer. Anal. 49 (2012), 2430-2450), we suggested a new error analysis for the residual of Galerkin projection onto rational Krylov spaces applied to a Sylvester equation with a rank-1 right-hand side. In this talk we will consider more generally small-rank right-hand sides, where block Krylov methods and tangential interpolation problems play an important role.
Bernhard Beckermann
Dept. of Mathematics, University of Lille
[email protected]

Talk 2. Stability-corrected spectral Lanczos decomposition algorithm for wave propagation in unbounded domains
Applying Krylov subspace methods to exterior wave field problems has become an active topic of recent research. In this paper, we introduce a new Lanczos-based solution method via stability-corrected operator exponents, allowing us to construct structure-preserving reduced-order models (ROMs) respecting the delicate spectral properties of the original scattering problem. The ROMs are unconditionally stable and are based on a renormalized Lanczos algorithm, which enables us to efficiently compute the solution in the frequency and time domains. We illustrate the performance of our method through a number of numerical examples in which we simulate 2D electromagnetic wave propagation in unbounded domains.
Rob Remis
Faculty of Electrical Engineering, Mathematics and Computer Science
Delft University of Technology
[email protected]

Vladimir Druskin
Schlumberger-Doll Research
[email protected]

Talk 3. Generalized rational Krylov decompositions
The notion of an orthogonal rational Arnoldi decomposition, as introduced by Axel Ruhe, is a natural generalization of the well-known (polynomial) Arnoldi decomposition. Generalized rational Krylov decompositions are obtained by removing the orthogonality assumption. Such decompositions are interesting linear algebra objects by themselves and we will study some of their algebraic properties, as well as the rational Krylov methods that can be associated with them.
Stefan Guttel
Mathematical Institute
University of Oxford
[email protected]

Talk 4. Interpolatory model reduction strategies for nonlinear parametric inversion
We will show how reduced order models can significantly reduce the cost of general inverse problems approached through parametric level set methods. Our method drastically reduces the cost of solving forward problems in diffuse optical tomography (DOT) by using interpolatory, i.e. rational Krylov based, parametric model reduction. In the DOT setting, these surrogate models can approximate both the cost functional and the associated Jacobian with little loss of accuracy and significantly reduced cost.
Serkan Gugercin
Dept. of Mathematics
Virginia Tech
[email protected]

Christopher A. Beattie
Dept. of Mathematics
Virginia Tech
[email protected]

Saifon Chaturantabut
Dept. of Mathematics
Virginia Tech
[email protected]

Eric de Sturler
Dept. of Mathematics
Virginia Tech
[email protected]

Misha E. Kilmer
Dept. of Mathematics
Tufts University
[email protected]

MS 9. New trends in tridiagonal matrices - Part I of II
Talk 1. Direct and inverse problems on pseudo-Jacobi matrices

2012 SIAM Conference on Applied Linear Algebra

A theorem of Friedland and Melkman states that a non-negative Jacobi matrix can be uniquely recovered from the spectra of the upper and lower principal submatrices obtained by deleting its kth row and column. Here this result is revisited and generalized. Other related problems are also investigated, including existence and uniqueness theorems.
Natalia Bebiano
Department of Mathematics
University of Coimbra
3001-454 Coimbra
[email protected]

Susana Furtado
Faculty of Economics
University of Porto
4200-464 Porto
[email protected]

J. da Providencia
Department of Physics
University of Coimbra
3004-516 Coimbra
[email protected]

Talk 2. Schwarz matrices and generalized Hurwitz polynomials
We present solutions of direct and inverse spectral problems for a kind of Schwarz matrices (tridiagonal matrices with only one nonzero main-diagonal entry). We study the dependence of the spectra of such matrices on the signs of their off-main-diagonal entries. We show that for certain distributions of those signs, the characteristic polynomial of the corresponding Schwarz matrix is a generalized Hurwitz polynomial. Recall that a real polynomial p(z) = p0(z^2) + z p1(z^2), where p0 and p1 are the even and odd parts of p, respectively, is generalized Hurwitz if and only if the zeroes of p0 and p1 are real, simple and interlacing.
Mikhail Tyaglov
Institut fur Mathematik
Technische Universitat Berlin
Sekretariat MA 4-5
Straße des 17. Juni 136
10623 Berlin
[email protected]
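The quoted criterion can be checked mechanically. The sketch below is illustrative code, not from the talk: it splits a polynomial into its even and odd parts and tests whether their zeros are real, simple and interlacing, exactly as in the definition above:

```python
import numpy as np

def even_odd_parts(c):
    """Split p(z) = p0(z^2) + z*p1(z^2); c lists coefficients, highest degree first."""
    rev = list(c)[::-1]                     # lowest degree first
    p0 = np.array(rev[0::2][::-1], float)   # coefficients of p0, highest first
    p1 = np.array(rev[1::2][::-1], float)   # coefficients of p1, highest first
    return p0, p1

def is_generalized_hurwitz(c):
    """Abstract's criterion: zeros of p0 and p1 are real, simple and interlacing."""
    p0, p1 = even_odd_parts(c)
    r0, r1 = np.roots(p0), np.roots(p1)
    if np.any(np.abs(r0.imag) > 1e-9) or np.any(np.abs(r1.imag) > 1e-9):
        return False
    tagged = sorted([(x, 0) for x in r0.real] + [(x, 1) for x in r1.real])
    vals = [v for v, _ in tagged]
    tags = [t for _, t in tagged]
    simple = all(b - a > 1e-9 for a, b in zip(vals, vals[1:]))
    interlace = all(s != t for s, t in zip(tags, tags[1:]))
    return simple and interlace

# p(z) = (z+1)(z+2)(z+3) = z^3 + 6z^2 + 11z + 6 is Hurwitz:
# p0(w) = 6w + 6 (zero -1), p1(w) = w + 11 (zero -11): real, simple, interlacing
print(is_generalized_hurwitz([1, 6, 11, 6]))    # True
print(is_generalized_hurwitz([1, 1, 0, 0, 1]))  # False: p0(w) = w^2 + 1, zeros not real
```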

Talk 3. On the Moore-Penrose inverse of singular, symmetric and periodic Jacobi M-matrices
We aim here at determining the Moore-Penrose inverse J† of any singular, symmetric and periodic Jacobi M-matrix J, both through direct computations and by considering it as a perturbation of a singular, symmetric and nonperiodic Jacobi M-matrix. We tackle the problem by applying methods from the theory of operators on finite networks, since the off-diagonal entries of J can be identified with the conductance function of a weighted n-cycle. Then J appears as a positive-semidefinite Schrodinger operator on the cycle and hence J† is nothing else than the corresponding Green operator. We also consider the problem of characterizing when J† is itself an M-matrix.
Andres M. Encinas
Departament de Matematica Aplicada III
Universitat Politecnica de Catalunya
08034 Barcelona

[email protected]

Enrique Bendito
Departament de Matematica Aplicada III
Universitat Politecnica de Catalunya
08034 Barcelona
[email protected]

Angeles Carmona
Departament de Matematica Aplicada III
Universitat Politecnica de Catalunya
08034 Barcelona
[email protected]

Margarida Mitjana
Departament de Matematica Aplicada I
Universitat Politecnica de Catalunya
08034 Barcelona
[email protected]
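The network identification described above is easy to reproduce numerically: the Laplacian of a weighted cycle is a singular, symmetric, periodic Jacobi M-matrix, and its Moore-Penrose inverse behaves as a Green operator. A minimal sketch (the conductance values are arbitrary, and this verifies properties only, not the authors' construction):

```python
import numpy as np

n = 6
c = np.array([1.0, 2.0, 0.5, 1.5, 1.0, 3.0])  # conductances of cycle edges (i, i+1 mod n)

# J = Laplacian of the weighted n-cycle: singular, symmetric, periodic Jacobi M-matrix
J = np.zeros((n, n))
for i in range(n):
    j = (i + 1) % n
    J[i, i] += c[i]; J[j, j] += c[i]
    J[i, j] -= c[i]; J[j, i] -= c[i]

Jp = np.linalg.pinv(J)

print(np.allclose(J @ Jp @ J, J))       # Penrose identity
print(np.allclose(Jp, Jp.T))            # J† is symmetric
print(np.allclose(Jp @ np.ones(n), 0))  # Green operator: annihilates constants
```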

Talk 4. The commutant of the tridiagonal pattern
We consider patterns that allow real commutativity with an irreducible tridiagonal n-by-n pattern. All are combinatorially symmetric. All also allow a complex symmetric commuting pair, but only some allow a real symmetric commuting pair. We also show that any matrix that commutes with an irreducible tridiagonal matrix satisfies certain ratio equations. Generalizations to matrices with tree patterns are also considered.
Charles R. Johnson
Department of Mathematics
College of William and Mary
PO Box 8795
Williamsburg, VA 23187-8795
[email protected]

MS 10. Numerical algorithms for switching systems: from theory to applications
Talk 1. Observer design for hybrid systems
The state estimation problem has been the subject of intensive study for many years by both the computer science community in the discrete domain and the control community in the continuous domain, but it has been only scantly investigated in the hybrid system domain. In this talk, we present a design methodology for dynamical observers of hybrid systems with linear continuous-time dynamics, which reconstruct the complete state from the knowledge of the inputs and outputs of a hybrid plant. We demonstrate the methodology by building a hybrid observer for an industrial automotive control problem: on-line identification of the actual engaged gear.
Maria Domenica Di Benedetto
University of L'Aquila
[email protected]

Talk 2. About polynomial instability for linear switched systems
In this talk we present recent results on the characterization of marginal instability for linear switched systems. Our main contribution consists in pointing out a resonance phenomenon associated with marginal instability. In particular we derive an upper bound on the norm of the state at time t, which is polynomial in t and whose degree is computed from the


resonance structure of the system. We also derive analogous results for discrete-time linear switched systems.
Paolo Mason
Laboratoire des Signaux et Systemes
[email protected]

Yacine Chitour
Laboratoire des Signaux et Systemes
Universite Paris-Sud
[email protected]

Mario Sigalotti
CMAP, INRIA Saclay
[email protected]

Talk 3. Stability and stabilization of positive switched systems: state of the art and open problems
A positive switched system (PSS) consists of a family of positive state-space models and a switching law specifying when and how the switching among the various models takes place. This class of systems has some interesting practical applications: PSSs have been adopted for describing networks employing TCP and other congestion control applications, for modeling consensus and synchronization problems, and, quite recently, for describing viral mutation dynamics under drug treatment. In the talk we will provide a comprehensive picture of the conditions for stability and for stabilizability, and we will point out some open problems.
Maria Elena Valcher
Dept. of Information Engineering
University of Padova
[email protected]

Ettore Fornasini
Dept. of Information Engineering
University of Padova
[email protected]

Talk 4. The joint spectral radius for semigroups generated by switched differential algebraic equations
We introduce the joint spectral radius for matrix semigroups that are generated by switched linear ordinary differential equations with jumps. The jumps are modeled by projectors which are assumed to commute with the generators of the flow. This setup covers switched differential algebraic equations. It is shown that an exponential growth rate can only be defined if the discrete semigroup generated by the projections is product bounded. Assuming this is true, we show that Barabanov norms may be defined in the irreducible case and that a converse Lyapunov theorem holds. Some further properties of the exponential growth rate are discussed.
Fabian Wirth
Dept. of Mathematics
University of Wurzburg
[email protected]

Stephan Trenn
Dept. of Mathematics
Technical University of Kaiserslautern
[email protected]

MS 11. Recent advances in fast iterative solvers - Part II of II
Talk 1. Combination preconditioning of saddle-point systems for positive definiteness
There are by now many preconditioning techniques for saddle-point systems, and also a growing number of applications where such techniques are required. In this talk we will discuss

preconditioners which preserve self-adjointness in non-standard inner products, and the combination of such preconditioners as introduced by Martin Stoll and the first author in 2008. Here we will show how two preconditioners which ensure self-adjointness in different inner products, but which are both indefinite, may be combined to yield a positive definite self-adjoint preconditioned system in a third inner product, which thus allows robust application of the Hestenes-Stiefel conjugate gradient method in that inner product.
Andrew J. Wathen
Mathematical Institute
Oxford University
[email protected]

Jennifer Pestana
Mathematical Institute
Oxford University
[email protected]

Talk 2. Preconditioned iterative methods for nonsymmetric matrices and nonstandard inner products
The convergence of a minimum residual method applied to a linear system with a nonnormal coefficient matrix is not well understood in general. This can make choosing an effective preconditioner difficult. In this talk we present a new GMRES convergence bound. We also show that the convergence of a nonstandard minimum residual method, applied to a preconditioned system, is bounded by a term that depends primarily on the eigenvalues of the preconditioned coefficient matrix, provided the preconditioner and coefficient matrix are self-adjoint with respect to nearby Hermitian sesquilinear forms. We relate this result to the convergence of standard methods.
Jennifer Pestana
Mathematical Institute
University of Oxford
[email protected]

Andrew J. Wathen
Mathematical Institute
University of Oxford
[email protected]

Talk 3. Multi-preconditioned GMRES
Standard Krylov subspace methods only allow the user to choose a single preconditioner. In many situations, however, there is no 'best' choice, but a number of possible candidates. In this talk we describe an extension of GMRES, multi-preconditioned GMRES, which allows the use of more than one preconditioner and combines their properties in an optimal way. As well as describing some theoretical properties of the new algorithm, we will present numerical results which highlight the utility of the approach.
Tyrone Rees
Numerical Analysis Group
Rutherford Appleton Laboratory
[email protected]

Chen Greif
Dept. of Computer Science
University of British Columbia
[email protected]

Daniel B. Szyld
Dept. of Mathematics
Temple University
[email protected]

Talk 4. Bounds on the eigenvalues of indefinite matrices arising from interior-point methods
Interior-point methods feature prominently in the solution of


constrained optimization problems, and involve the need to solve a sequence of 3 × 3 block indefinite matrices that become increasingly ill-conditioned throughout the iteration. Most solution approaches are based on reducing the system size using a block Gaussian elimination procedure. In this talk we use energy estimates to obtain bounds on the eigenvalues of the original, augmented matrix, which indicate that the spectral structure of this matrix may be favorable compared to matrices obtained by performing a partial elimination of variables before solving the system.
Chen Greif
Department of Computer Science
The University of British Columbia
[email protected]

Erin Moulding
Department of Mathematics
The University of British Columbia
[email protected]

Dominique Orban
Mathematics and Industrial Engineering Department
Ecole Polytechnique de Montreal
[email protected]

MS 12. Application of statistics to numerical linear algebra algorithms - Part II of II
Talk 1. Spectral graph theory, sampling matrix sums, and near-optimal SDD solvers
We present a near-optimal solver for Symmetric Diagonally Dominant (SDD) linear systems. The solver is a great example of the power of statistical methods in linear algebra, and in particular of sampling sums of positive semi-definite matrices. Crucial to the speed of the solver is the ability to perform fast sampling. In turn, key to fast sampling are low-stretch spanning trees, a central notion in spectral graph theory and combinatorial preconditioning. We will see that a low-stretch tree is essentially a 'spectral spine' of a graph Laplacian, allowing us to create a hierarchy of spectrally similar graphs which lends itself to a very fast multilevel solver.
Ioannis Koutis
Computer Science Department
University of Puerto Rico
[email protected]

Gary Miller
Computer Science Department
Carnegie Mellon University
[email protected]

Richard Peng
Computer Science Department
Carnegie Mellon University
[email protected]

Talk 2. Implementation of a randomization algorithm for dense linear algebra libraries
Randomization algorithms are an alternative to pivoting for factoring dense matrices to solve systems of linear equations. These techniques are more suitable to parallel architectures when compared to pivoting, requiring a reduced amount of communication and no synchronization during the factorization. The core kernel requires the multiplication of sparse matrices with a specific structure. The main issue in implementing such algorithms resides in the data dependency patterns, especially for the symmetric case. An implementation for the PLASMA library is described and traces are presented, together with performance data.

Dulceneia Becker
University of Tennessee
[email protected]

Marc Baboulin
Inria Saclay Ile-de-France
University Paris-Sud
[email protected]

Jack Dongarra
University of Tennessee
[email protected]

Talk 3. Implementing randomized matrix algorithms in large-scale parallel environments
Recent work from theoretical computer science on randomized matrix algorithms has led to the development of high-quality numerical implementations. Here, we describe our parallel iterative least-squares solver LSRN. The parallelizable normal random projection in the preconditioning phase leads to a very well-conditioned system. Hence, the number of iterations is fully predictable if we apply LSQR or the Chebyshev semi-iterative method to the preconditioned system. The latter method is particularly efficient for solving large-scale problems on clusters with high communication cost, e.g., on Amazon Elastic Compute Cloud clusters, which are increasingly common in large-scale data applications.
Michael W. Mahoney
Department of Mathematics
Stanford University
[email protected]
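The preconditioning idea behind LSRN-type solvers can be sketched in a few lines. This is a simplified dense illustration, not the LSRN code: a Gaussian row sketch G, a QR factorization of GA, and then LSQR on the well-conditioned A R^{-1}:

```python
import numpy as np
from scipy.sparse.linalg import lsqr

rng = np.random.default_rng(1)
m, n = 2000, 50
A = rng.standard_normal((m, n)) * np.logspace(0, 3, n)  # ill-conditioned columns
b = rng.standard_normal(m)

# Gaussian random projection of the rows, then R from a QR of the sketch
s = 4 * n
G = rng.standard_normal((s, m)) / np.sqrt(s)
_, R = np.linalg.qr(G @ A)

# the preconditioned matrix A R^{-1} is well conditioned with high probability
AR = A @ np.linalg.inv(R)
cond_pre = np.linalg.cond(AR)

# a few LSQR iterations now suffice; map the solution back through R
y = lsqr(AR, b, atol=1e-14, btol=1e-14)[0]
x = np.linalg.solve(R, y)

x_ref = np.linalg.lstsq(A, b, rcond=None)[0]
print(cond_pre < 10, np.allclose(x, x_ref, rtol=1e-6))
```

Because cond(A R^{-1}) is a small, dimension-free constant, the LSQR iteration count is predictable, which is exactly the property the abstract emphasizes.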

Talk 4. Random sampling preconditioners
In this talk we study the use of preconditioners based on random sampling of rows for accelerating the solution of linear systems. We argue that the fusion of randomization with preconditioning is what enables fast and reliable algorithms. We will discuss both dense and sparse matrices. For dense matrices, we will describe Blendenpik, a least-squares solver for dense highly overdetermined systems that outperforms LAPACK by large factors and scales significantly better than any QR-based solver. For sparse matrices, we relate random sampling preconditioners to fast SDD solvers, and generalize some of the techniques to finite element matrices.
Haim Avron
Mathematical Sciences Department
IBM T. J. Watson Research Center
[email protected]

Sivan Toledo
Department of Computer Science
Tel-Aviv University
[email protected]

MS 13. Rational Krylov methods: analysis and applications - Part II of II
Talk 1. Rational Krylov methods for nonlinear matrix problems
This talk is about the solution of nonlinear eigenvalue problems and linear systems with a nonlinear parameter. Krylov and rational Krylov methods are known to be efficient and reliable for the solution of such matrix problems with a linear parameter. The nonlinear function can be approximated by a polynomial. In earlier work, we suggested the use of Taylor expansions of the nonlinear function with an a priori undetermined degree. This led to the Taylor Arnoldi method, which is an Arnoldi method applied to an infinite-dimensional companion linearization.


When an interpolating polynomial is used instead of a Taylor series, there is a similar connection with the rational Krylov subspace method applied to a linearization. The Krylov subspaces enjoy properties similar to the linear case, such as moment matching, and the convergence looks similar to convergence for a linear problem. We present several choices of polynomials and also discuss ideas for future work.
Karl Meerbergen
Department of Computer Science
KU Leuven
[email protected]

Roel Van Beeumen
Department of Computer Science
KU Leuven
[email protected]

Wim Michiels
Department of Computer Science
KU Leuven
[email protected]

Talk 2. Block Gauss and anti-Gauss quadrature rules with application to networks
The symmetric and nonsymmetric block Lanczos processes can be applied to compute Gauss quadrature rules for functionals with matrix-valued measures. We show that estimates of upper and lower bounds for these functionals can be computed inexpensively by evaluating pairs of block Gauss and anti-Gauss rules. An application to network analysis is described.
Lothar Reichel
Dept. of Mathematics
Kent State University
[email protected]

David Martin
Dept. of Mathematics
Kent State University
[email protected]
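For the scalar (non-block) case the mechanism is easy to demonstrate: k Lanczos steps give a tridiagonal T_k whose Gauss rule estimates the functional u^T f(A) u as e1^T f(T_k) e1. An illustrative sketch with f = exp (the block rules of the talk generalize this to matrix-valued measures):

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(2)
n, k = 200, 12
B = rng.standard_normal((n, n))
A = (B + B.T) / (2 * np.sqrt(n))        # symmetric test matrix, spectrum O(1)
u = rng.standard_normal(n)
u /= np.linalg.norm(u)

# k Lanczos steps starting from u build the tridiagonal T_k
alpha = np.zeros(k)
beta = np.zeros(k - 1)
q_prev, q, b_prev = np.zeros(n), u.copy(), 0.0
for j in range(k):
    w = A @ q - b_prev * q_prev
    alpha[j] = q @ w
    w -= alpha[j] * q
    if j < k - 1:
        beta[j] = np.linalg.norm(w)
        q_prev, q, b_prev = q, w / beta[j], beta[j]

T = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)

# Gauss-rule estimate of the functional u^T exp(A) u
est = expm(T)[0, 0]
exact = u @ expm(A) @ u
print(abs(est - exact))   # tiny: the k-node Gauss rule is very accurate here
```

Pairing such Gauss estimates with anti-Gauss rules, as in the talk, brackets the functional from both sides.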

Talk 3. On optimality of rational Krylov based low-rank approximations of large-scale matrix equations
In this talk, we will discuss projection-based approximations for the solution of Lyapunov equations of the form

AXE^T + EXA^T + BB^T = 0,

with A = A^T, E = E^T ∈ R^{n×n} and B ∈ R^{n×m}. Recently, a relation has been shown between minimizing the objective function

f : M → R,  X ↦ tr(XAXE + BB^T X)

on the manifold M of symmetric positive semi-definite matrices of rank k and the L-norm defined by the operator

L := -E ⊗ A - A ⊗ E

together with the inner product ⟨u, v⟩_L = ⟨u, Lv⟩. While so far this minimization problem was solved by means of a Riemannian optimization approach, here we will discuss an interpolation-based method which leads to the same results but relies on projecting the original Lyapunov equation onto a rational Krylov subspace. It will turn out that this can be achieved by the iterative rational Krylov algorithm (IRKA). Besides a generalization for the case of A ≠ A^T, we will also discuss an extension for more general equations of the form

AXE + FXB + CD = 0,

with A, F ∈ R^{n×n}, B, E ∈ R^{m×m}, C ∈ R^{n×p} and D ∈ R^{p×m}.
Tobias Breiten
Computational Methods in Systems and Control Theory
Max Planck Institute for Dynamics of Complex Technical Systems
[email protected]

Peter Benner
Computational Methods in Systems and Control Theory
Max Planck Institute for Dynamics of Complex Technical Systems
[email protected]
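For E = I the Lyapunov equation above reduces to AX + XA^T = -BB^T, which can be checked against a library solver. A minimal sketch (the sizes and the stable symmetric test matrix are arbitrary choices, not from the talk):

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

rng = np.random.default_rng(3)
n, m = 8, 2
W = rng.standard_normal((n, n))
A = -np.eye(n) + 0.05 * (W + W.T)   # symmetric and stable (A = A^T as in the talk)
B = rng.standard_normal((n, m))

# E = I: solve A X + X A^T = -B B^T
X = solve_continuous_lyapunov(A, -B @ B.T)
print(np.allclose(A @ X + X @ A.T + B @ B.T, np.zeros((n, n))))  # True
print(np.allclose(X, X.T))                                       # True
```

The talk is about approximating X by a rank-k matrix; the full solve above only provides the reference solution such low-rank methods are compared against.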

Talk 4. Inverse problems for large-scale dynamical systems in the H2-optimal model reduction framework
In this work we investigate the rational Krylov subspace (RKS) projection method with application to inverse problems. We derive a representation for the reduced Jacobian as the product of a time-dependent and a stationary part. Then we show that the RKS satisfying the Meier-Luenberger necessary H2-optimality condition not only minimizes the approximation error but completely annuls its influence on the inversion result (even if the subspace is not globally optimal). More precisely, the approximation error belongs to the (left) null-space of the reduced Jacobian. We compare inversion on such subspaces with other nearly optimal RKSs based on the Zolotarev problem and an adaptive pole selection algorithm.
Mikhail Zaslavsky
Schlumberger-Doll Research
[email protected]

Vladimir Druskin
Schlumberger-Doll Research
[email protected]

Valeria Simoncini
Department of Mathematics
University of Bologna
[email protected]

MS 14. New trends in tridiagonal matrices - Part II of II
Talk 1. On generalized Jacobi matrices which are symmetric in Krein spaces
I will give some motivations to introduce generalized Jacobi matrices of a special type. Then, some direct and inverse problems for these matrices will be presented.
Maxim Derevyagin
Institut fur Mathematik
Technische Universitat Berlin
Sekretariat MA 4-5
Straße des 17. Juni 136
10623 Berlin
[email protected]

Talk 2. On the characteristic function for Jacobi matrices
For a certain class of infinite Jacobi matrices with a discrete spectrum, a characteristic function is introduced as an analytic function on a suitable complex domain. It is shown that the zero set of the characteristic function actually coincides with the point spectrum. As an illustration, several explicitly solvable examples are discussed.
Pavel Stovicek
Department of Mathematics
Faculty of Nuclear Sciences and Physical Engineering
Czech Technical University
Trojanova 13
120 00 Prague


Czech Republic
[email protected]

Talk 3. Tridiagonal matrices in comb filters
In the present talk we use some known and new results on tridiagonal matrices to compute the minimum mean square error for a decision feedback equalizer when the filter is a comb filter. We also apply those mathematical results on tridiagonal matrices to a problem in linear estimation with comb filters.
Jesus Gutierrez-Gutierrez
CEIT and Tecnun
Universidad de Navarra
Paseo de Manuel Lardizabal, no. 13
20018 Donostia - San Sebastian
[email protected]

Talk 4. The nullity theorem: forecasting structures in the inverses of sparse matrices
The nullity theorem for matrices was formulated by Markham and Fiedler in 1986. At first sight, this theorem and its five-line proof appear trivial. Examples show, however, that it is a powerful tool in computations with sparse and structured rank matrices. It is, e.g., a straightforward consequence of this theorem that the inverse of a tridiagonal matrix is semiseparable, and that the inverse of a band matrix is higher-order semiseparable, without even posing irreducibility constraints. We will reconsider the nullity theorem and extend it to predicting structures in LU and QR factorizations.
Raf Vandebril
KU Leuven
Departement Computerwetenschappen (Postbus 02402)
Celestijnenlaan 200A
3001 Heverlee
[email protected]
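The tridiagonal consequence of the nullity theorem is easy to verify numerically: every off-diagonal block of the inverse of an irreducible tridiagonal matrix has rank one, i.e. the inverse is semiseparable. A minimal sketch with an arbitrary diagonally dominant test matrix:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 8
# irreducible tridiagonal matrix, made strictly diagonally dominant
T = (np.diag(rng.uniform(3.0, 4.0, n))
     + np.diag(rng.uniform(0.5, 1.0, n - 1), 1)
     + np.diag(rng.uniform(0.5, 1.0, n - 1), -1))
Ti = np.linalg.inv(T)

# nullity theorem consequence: every off-diagonal block of inv(T) has rank 1,
# i.e. the inverse of a tridiagonal matrix is semiseparable
ranks = [int(np.linalg.matrix_rank(Ti[:k, k:])) for k in range(1, n)]
print(ranks)   # [1, 1, 1, 1, 1, 1, 1]
```

The same experiment with a bandwidth-b matrix shows rank-b off-diagonal blocks, matching the higher-order semiseparable claim above.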

MS 15. Application of compressed sensing in Bio-Medicine
Talk 1. Evaluation of compressed sensing impact in cardiac signals processing and transmission
Sensor networks are a field which benefits significantly from compressed sensing (CS), as, for instance, quality of service or battery lifetime are directly improved by the subsequent reduction of traffic. Networks transmitting biomedical data, and within those, cardiac data, have particular advantages. ECG and PPG are the most studied 1D signals in the biomedical field, and the ones where CS implementation has been studied the most. One study scenario uses the sparsity of the wavelet decomposition of these signals, and iterative shrinkage/thresholding algorithms. Focus is given to accuracy and computation overhead, the most critical constraints of cardiac data processing. Networking implications are quantified for different types of real network models, as well as the impact on signal quality and on the extracted information (heart rate, oxygen saturation, pulse wave velocity).
Eduardo Pinheiro
Instituto de Telecomunicações
Instituto Superior Técnico, Lisboa, Portugal
[email protected]

Talk 2. Compressive sensing in drug discovery
The design of new drugs is often based on certain interactions between a small molecule (the drug) and a protein. Often a high affinity between the small molecule and the protein is desirable. The affinity can be estimated by using statistical

thermodynamics. In order to provide statistical data, high-performance computing machines are used to explore the conformational space of the molecular system. The corresponding algorithms are based on simplified physical models (force fields) for the interaction between the drug molecule and the protein. These models mostly do not take quantum effects into account. The talk will present new ideas to better include quantum chemical information, provide some important application areas of molecular simulation in drug discovery, and give some hints about using compressive sensing in this field.
Marcus Weber
Zuse Institute Berlin (ZIB), Germany
[email protected]

Talk 3. Reconstruction of bacterial communities using sparse representation
Determining the identities and frequencies of species present in a sample is a central problem in metagenomics, with scientific, environmental and clinical implications. A popular approach to the problem is sequencing the ribosomal 16S RNA gene in the sample using universal primers, and using variation in the gene's sequence between different species to identify the species present in the sample. We present a novel framework for community reconstruction based on sparse representation; while millions of microorganisms are present on earth, with known 16S sequences stored in a database, only a small minority (typically a few hundred) are likely to be present in any given sample. We discuss the statistical framework, the algorithms used, and results in terms of accuracy and species resolution.
Or Zuk
Broad Institute, Cambridge, MA
[email protected]
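The sparse-representation viewpoint can be illustrated with a generic basis pursuit solve. This is a toy sketch, unrelated to the 16S data or the talk's algorithm: minimize ||x||_1 subject to Ax = b, written as a linear program with x = xp - xm:

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(5)
m, n, k = 20, 40, 3
A = rng.standard_normal((m, n)) / np.sqrt(m)   # generic sensing matrix
x0 = np.zeros(n)
x0[rng.choice(n, size=k, replace=False)] = rng.standard_normal(k)
b = A @ x0                                     # measurements of a k-sparse signal

# basis pursuit min ||x||_1 s.t. Ax = b as an LP: x = xp - xm with xp, xm >= 0
c = np.ones(2 * n)
res = linprog(c, A_eq=np.hstack([A, -A]), b_eq=b,
              bounds=[(0, None)] * (2 * n), method='highs')
x = res.x[:n] - res.x[n:]

print(np.allclose(A @ x, b, atol=1e-6))            # feasible
print(np.abs(x).sum() <= np.abs(x0).sum() + 1e-6)  # l1 never worse than the true x0
```

With these sizes the l1 minimizer typically coincides with the sparse x0, which is the recovery phenomenon the community-reconstruction framework relies on.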

Talk 4. Sensing genome via factorization
Over the last decade, matrix factorization and dimension reduction methods have been used for mining in genomics and proteomics. Most recently, norm-one optimization is also emerging as a useful approach for this purpose. We propose a hybrid algorithm which uses the utility of the SVD along with the efficiency and robustness of norm-one minimization to overcome the shortcomings of each of these approaches.
Amir Niknejad
College of Mount Saint Vincent
New York, USA
[email protected]

MS 16. Preconditioning of non-normal linear systems arising in scattering problems
Talk 1. Approximate deflation preconditioning methods for penetrable scattering problems
In this talk I consider the Lippmann-Schwinger (LS) integral equation for inhomogeneous acoustic scattering. I demonstrate that spectral properties of the LS equations suggest that deflation-based preconditioning might be effective in accelerating the convergence of a restarted GMRES method. Much of the convergence theory for deflation-based preconditioning is based on using exact invariant subspaces. I will present an analytical framework for the convergence theory of general approximate deflation that is widely applicable. Furthermore, numerical illustrations of the spectral properties also reveal that a significant portion of the spectrum is approximated well on


coarse grids. To exploit this, I develop a novel restarted GMRES method with adaptive preconditioning based on spectral approximations on multiple grids.
Josef Sifuentes
Courant Institute of Mathematical Sciences
New York University
[email protected]

Mark Embree
Department of Computational and Applied Mathematics
Rice University
[email protected]

Talk 2. Direct approximate factoring of the inverse
To precondition a large and sparse linear system, two direct methods for approximate factoring of the inverse are devised. The algorithms are fully parallelizable and appear to be more robust than the iterative methods suggested for the task. A method to compute one of the matrix subspaces optimally is derived. These approaches generalize the approximate inverse preconditioning techniques in several natural ways.
Marko Huhtanen
Department of Electrical and Information Engineering
University of Oulu
[email protected]

Mikko Byckling
[email protected]

Talk 3. Regularization of singular integral operators as a preconditioning strategy
In this talk we consider the singular integral equation arising in electromagnetic scattering on inhomogeneous objects in free space. To understand the apparent poor convergence of iterative methods, such as GMRES, we analyze the spectrum of both the integral operator and the system matrix. It turns out that the convergence is slowed down by dense spectral clusters, whose origin and location may be traced to the nontrivial essential spectrum of the integral operator. We propose a multiplicative regularizer which reduces the extent of these clusters and, in conjunction with the deflation of largest-magnitude eigenvalues, makes for a good preconditioner.
Neil Budko
Numerical Analysis, Delft Institute of Applied Mathematics
Delft University of Technology
[email protected]

Grigorios Zouros
School of Electrical and Computer Engineering
National Technical University of Athens
[email protected]

Talk 4. High-order shifted Laplace preconditioners for wave equations
Shifted Laplace preconditioning techniques were introduced by Erlangga, Vuik and Oosterlee (Applied Numerical Mathematics, 2004) for solving PDE problems related to wave-like equations. The aim of this talk is to propose a generalization of shifted Laplace preconditioning methods by using operator representations combined with complex Padé approximants. We will show that the resulting high-order shifted Laplace preconditioners are highly efficient and robust for two- and three-dimensional scattering problems that exhibit complex geometrical features (e.g. resonant structures). Furthermore, the convergence is proved to be only weakly frequency dependent.
Xavier Antoine
Institut Elie Cartan de Nancy

University of Lorraine
[email protected]

Christophe Geuzaine
Institut Montefiore
University of Liege
[email protected]

Ibrahim Zangre
Institut Elie Cartan de Nancy
University of Lorraine
[email protected]
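The shifted Laplace idea, in its basic (not high-order) form, can be sketched on a 1D Helmholtz model problem: precondition the indefinite operator with the same operator at a complex-shifted wavenumber. All discretization choices below are illustrative, not from the talk:

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import splu, gmres, LinearOperator

# 1D Helmholtz model problem -u'' - k^2 u = f on (0,1) with Dirichlet conditions
n, kwave = 300, 20.0
h = 1.0 / (n + 1)
L = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format='csc') / h**2
A = (L - kwave**2 * sp.eye(n)).tocsc().astype(complex)   # indefinite Helmholtz matrix
b = np.ones(n, dtype=complex)

# shifted Laplace preconditioner: the same operator at a complex-shifted wavenumber
M = (L - (1 - 0.5j) * kwave**2 * sp.eye(n)).tocsc().astype(complex)
Mlu = splu(M)
Minv = LinearOperator((n, n), matvec=Mlu.solve, dtype=complex)

x, info = gmres(A, b, M=Minv, restart=100, maxiter=200)
res = np.linalg.norm(A @ x - b) / np.linalg.norm(b)
print(info, res)   # info == 0 means GMRES converged
```

The complex shift makes M easy to invert (it is coercive) while keeping its spectrum close enough to that of A for preconditioned GMRES to converge quickly; the talk's high-order Padé variants refine exactly this construction.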

MS 17. Markov Chains
Talk 1. Markov chain properties in terms of column sums of the transition matrix
Questions are posed regarding the influence that the column sums of the transition probabilities of a stochastic matrix (with row sums all one) have on the stationary distribution, the mean first passage times and the Kemeny constant of the associated irreducible discrete-time Markov chain. Some new relationships, including some inequalities, and partial answers to the questions are given using a special generalized matrix inverse that has not previously been considered in the literature on Markov chains.
Jeffrey J. Hunter
School of Computing and Mathematical Sciences
Auckland University of Technology, Auckland, New Zealand
[email protected]
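The quantities in question are straightforward to compute for a small chain. A minimal sketch (the transition matrix is arbitrary, and the fundamental-type inverse below is the classical one, not the talk's special generalized inverse): mean first passage times from Z = (I - P + 1π^T)^{-1}, and the Kemeny constant, which is indeed independent of the starting state:

```python
import numpy as np

P = np.array([[0.5, 0.3, 0.2],
              [0.2, 0.6, 0.2],
              [0.1, 0.4, 0.5]])
n = P.shape[0]

# stationary distribution: pi P = pi with sum(pi) = 1
Amat = np.vstack([P.T - np.eye(n), np.ones(n)])
rhs = np.concatenate([np.zeros(n), [1.0]])
pi, *_ = np.linalg.lstsq(Amat, rhs, rcond=None)

# fundamental-type matrix Z = (I - P + 1 pi^T)^{-1}
Z = np.linalg.inv(np.eye(n) - P + np.outer(np.ones(n), pi))

# mean first passage times m_ij = (z_jj - z_ij) / pi_j  (so m_ii = 0)
M = (np.diag(Z)[None, :] - Z) / pi[None, :]

# Kemeny constant: sum_j pi_j * m_ij is the same for every start state i
K = M @ pi
print(np.allclose(K, K[0]))                # True: independent of i
print(np.isclose(K[0], np.trace(Z) - 1))   # True: equals tr(Z) - 1
```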

Talk 2. Hamiltonian cycle problem and Markov chains
We consider the famous Hamiltonian cycle problem (HCP) embedded in a Markov decision process (MDP). More specifically, we consider the HCP as an optimization problem over the space of either stationary policies, or of occupational measures induced by these stationary policies. This approach has led to a number of alternative formulations and algorithmic approaches involving researchers from a number of countries. These formulations exploit properties of a range of matrices that usually accompany investigations of Markov chains. It will be shown that when these matrices are induced by the given graph, some of the graph's underlying structures can be detected in their matrix analytic properties.

Jerzy Filar
School of Computer Science, Engineering and Mathematics
Flinders University, Bedford Park, SA, Australia
[email protected]

Talk 3. Inequalities for functions of transition matrices
Consider an irreducible finite-state Markov chain on N states. It is known that the N by N matrix vanishing on the diagonal and equal to the mean first passage matrix elsewhere is invertible. We prove that if N is greater than two, the diagonal entries of the inverse are negative, and obtain some related inequalities. Analogous results for a closely related matrix are derived as well. Joint with Michael Neumann and Olga Pryporova.

Iddo Ben-Ari
Department of Mathematics
University of Connecticut, Storrs, CT, United States of America
[email protected]
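The statement can be tried out numerically. The sketch below assumes the standard Kemeny-Snell fundamental-matrix formula for mean first passage times (the abstract does not spell out a formula); it builds the zero-diagonal mean first passage matrix of a small irreducible chain and checks that the diagonal of its inverse is negative:

```python
import numpy as np

# An arbitrary irreducible 4-state chain (rows sum to one).
P = np.array([[0.0, 0.5, 0.3, 0.2],
              [0.4, 0.0, 0.4, 0.2],
              [0.3, 0.3, 0.0, 0.4],
              [0.2, 0.3, 0.5, 0.0]])
n = P.shape[0]

# Stationary distribution: normalized left eigenvector for eigenvalue 1.
w, V = np.linalg.eig(P.T)
pi = np.real(V[:, np.argmin(np.abs(w - 1))])
pi /= pi.sum()

# Kemeny-Snell fundamental matrix; m_ij = (z_jj - z_ij) / pi_j for i != j.
Z = np.linalg.inv(np.eye(n) - P + np.outer(np.ones(n), pi))
M = (np.diag(Z)[None, :] - Z) / pi[None, :]   # mean first passage times, zero diagonal

Minv = np.linalg.inv(M)
print(np.diag(Minv))   # all negative for N > 2, as the talk states
```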

Talk 4. Compartmental systems and computation of their stationary probability vectors
To compute stationary probability vectors of Markov chains whose transition matrices are cyclic of index p may be a difficult task if p becomes large. A class of iterative aggregation/disaggregation (IAD) methods is proposed to overcome the difficulty. It is shown that the rate of convergence of the proposed IAD processes is governed by the maximal modulus of the eigenvalues lying outside the peripheral spectrum of the smoothing matrix. The examined generators of Markov chains come from compartmental systems, so the transition matrices under consideration may depend upon the appropriate stationary probability vectors. This nonlinearity presents further difficulties in computation.

Ivo Marek
School of Civil Engineering
Czech Institute of Technology, Praha, Czech Republic
[email protected]

MS 18. Preconditioning for PDE-constrained optimization - Part I of II
Talk 1. Structural spectral properties of symmetric saddle point problems
Symmetric and indefinite block structured matrices often arise after the discretization of a large variety of application problems, where the block form stems from the presence of more than one partial differential equation (PDE) in the problem, or from the imposition of some constraints, usually associated with PDEs. Structure-aware preconditioning strategies have emerged as winning devices for efficiently and optimally solving the associated large linear systems. In some relevant applications, the coefficient matrix shows a particular spectral symmetry around the origin, which should be taken into account when selecting both the iterative system solver and the most appropriate preconditioner. In this talk we analyze this symmetry in detail, and discuss its consequences in the solution of three model problems: optimal control, distributed time-harmonic parabolic control and distributed time-harmonic Stokes control.

Valeria Simoncini
Dipartimento di Matematica
Universita di Bologna, Italy
[email protected]

Wolfgang Krendl
Doctoral Program Computational Mathematics
Johannes Kepler University, Linz, Austria
[email protected]

Walter Zulehner
Institute of Computational Mathematics
Johannes Kepler University, 4040 Linz, Austria
[email protected]

Talk 2. Preconditioned iterative methods for Stokes and Navier-Stokes control problems
The development of iterative solvers for PDE-constrained optimization problems is a subject of considerable recent interest in numerical analysis. In this talk, we consider the numerical solution of two such problems, namely those of Stokes control and Navier-Stokes control. We describe the role of saddle point theory, mass matrix and Schur complement approximation, multigrid routines, and commutator arguments in the construction of solvers for these problems. We also discuss issues involved with regularization terms and, in the case of Navier-Stokes control, the outer iteration employed. We display numerical results that demonstrate the effectiveness of our solvers.

John Pearson
Mathematical Institute
University of Oxford
[email protected]

Talk 3. Preconditioners for elliptic optimal control problems with inequality constraints
In this talk we consider optimal control problems with an elliptic boundary value problem as state equation. Based on recent results for distributed elliptic optimal control problems without inequality constraints, we present and discuss preconditioners for the optimality system in the presence of additional control constraints or state constraints. The focus is on the performance of these preconditioners within a Krylov subspace method with respect to model and discretization parameters. A possible extension of the presented approach to optimal control problems with other state equations, such as the steady-state Stokes equations, is discussed.

Walter Zulehner
Institute of Computational Mathematics
Johannes Kepler University, Linz, Austria
[email protected]

Markus Kollmann
Doctoral Program Computational Mathematics
Johannes Kepler University, Linz, Austria
[email protected]

Talk 4. Nearly optimal block preconditioners for block two-by-two linear systems
For a class of block two-by-two systems of linear equations, we construct block preconditioning matrices and discuss the eigen-properties of the corresponding preconditioned matrices. The block preconditioners can be employed to accelerate the convergence rates of Krylov subspace iteration methods such as MINRES and GMRES. Numerical experiments show that the block preconditioners are feasible and effective when used in solving the block two-by-two linear systems arising from the finite-element discretizations of a class of PDE-constrained optimization problems.

Zhong-Zhi Bai
Institute of Computational Mathematics
Academy of Mathematics and Systems Science
Chinese Academy of Sciences
P. O. Box 2719, Beijing 100190, China
[email protected]

MS 19. Matrices and graphs - Part I of II
Talk 1. (0,1) matrices and the analysis of social networks
In the sociology literature, an m × n (0,1) matrix is called an actor-event matrix. In many examples, the matrix A is uniquely determined by AA^T, A^T A, and the fact that A is (0,1). In this talk, we present a result on pairs of actor-event matrices that are distinct but 'close', and yield the same AA^T and A^T A. Specifically, using techniques from combinatorial matrix theory, we characterise all pairs of (0,1) matrices A1, A2 such that A1 A1^T = A2 A2^T, A1^T A1 = A2^T A2, and the (0,1,−1) matrix A1 − A2 has at most one 1 and one −1 in each row and column.

Steve Kirkland
Hamilton Institute
National University of Ireland Maynooth
[email protected]
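A minimal instance of the pairs characterised above, using the two 2 × 2 permutation matrices (a toy example, not taken from the talk):

```python
import numpy as np

# Two distinct (0,1) matrices with identical AA^T and A^T A,
# whose difference has exactly one 1 and one -1 per row and column.
A1 = np.array([[1, 0],
               [0, 1]])
A2 = np.array([[0, 1],
               [1, 0]])

same_row_products = np.array_equal(A1 @ A1.T, A2 @ A2.T)
same_col_products = np.array_equal(A1.T @ A1, A2.T @ A2)
print(same_row_products, same_col_products)   # True True
print(A1 - A2)                                # the (0,1,-1) difference matrix
```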

Talk 2. Necessary and sufficient conditions for a Hamiltonian graph
A graph is singular if the zero eigenvalue is in the spectrum of its 0-1 adjacency matrix A. If an eigenvector belonging to the zero eigenspace of A has no zero entries, then the singular graph is said to be a core graph. A (κ, τ)-regular set is a subset of the vertices inducing a κ-regular subgraph such that every vertex not in the subset has τ neighbours in it. We consider the case when κ = τ, which relates to the eigenvalue zero under certain conditions. We show that if a regular graph has a (κ, κ)-regular set, then it is a core graph. By considering the walk matrix, we develop an algorithm to extract (κ, κ)-regular sets and formulate a necessary and sufficient condition for a graph to be Hamiltonian.

Irene Sciriha
Dept of Mathematics, Faculty of Science
Univ. of Malta
[email protected]

Domingos Moreira Cardoso
Departamento de Matemática
Univ. de Aveiro
[email protected]

Talk 3. On the eigenvalues of symmetric matrices associated with graphs
Let G be a weighted graph of order n. It is known that λ1 ≥ d1 and λ2 ≥ d3, where λi and di are the ith largest Laplacian eigenvalue and degree of G, respectively. For λ3 and smaller eigenvalues we show that it is possible to construct a weighted graph G of any order n such that λj < dn. To obtain these results we show that for every k ≥ 2 and n ≥ k + 1 there exists a weighted graph of order n whose adjacency matrix has exactly k negative eigenvalues. This is done by obtaining some beautiful properties of the Schur complements of the adjacency matrix.

Miriam Farber
Department of Mathematics
[email protected]
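The stated bounds λ1 ≥ d1 and λ2 ≥ d3 can be illustrated on a small unweighted example (a path on five vertices, chosen arbitrarily):

```python
import numpy as np

# Path graph P5 (unweighted), a small sanity check of the stated bounds.
A = np.diag(np.ones(4), 1)
A = A + A.T                                  # adjacency matrix
L = np.diag(A.sum(axis=1)) - A               # graph Laplacian

lam = np.sort(np.linalg.eigvalsh(L))[::-1]   # Laplacian eigenvalues, decreasing
d = np.sort(A.sum(axis=1))[::-1]             # degrees, decreasing

print(lam[0] >= d[0])   # lambda_1 >= d_1 -> True
print(lam[1] >= d[2])   # lambda_2 >= d_3 -> True
```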

Talk 4. An extension of the polytope of doubly stochastic matrices
We consider a class of matrices whose row and column sum vectors are majorized by given vectors b and c, and whose entries lie in the interval [0, 1]. This class generalizes the class of doubly stochastic matrices. We investigate the corresponding polytope Ω(b|c) of such matrices. Main results include a generalization of the Birkhoff-von Neumann theorem and a characterization of the faces, including edges, of Ω(b|c).

Richard A. Brualdi
Department of Mathematics
University of Wisconsin-Madison
[email protected]

Geir Dahl
Center of Mathematics for Applications
Departments of Mathematics and Informatics
University of Oslo
[email protected]

MS 20. Tensor based methods for high dimensional problems in scientific computing - Part I of II
Talk 1. Optimal a priori tensor decomposition for the solution of high dimensional problems
Tensor-based methods are receiving growing attention for their use in high dimensional applications in scientific computing, where functions of multiple parameters have to be approximated. These methods are based on the construction of tensor decompositions that provide approximate representations of functions on adapted low dimensional reduced bases. Here, we propose algorithms that are able to directly construct an approximation of optimal tensor decompositions with respect to a desired metric, without a priori information on the solution. Connections with optimal model reduction will be discussed.

Anthony Nouy
LUNAM Université, GeM, UMR CNRS 6183, Ecole Centrale de Nantes, Université de Nantes, France
[email protected]

Talk 2. Proper generalized decomposition of multiscale models
In this talk the coupling of a parabolic model with a system of local kinetic equations is analyzed. A space-time separated representation is proposed for the global model (this is simply the radial approximation proposed by Pierre Ladeveze in the LATIN framework (Non-linear Computational Structural Mechanics. Springer: New York, 1999)). The originality of the present work concerns the treatment of the local problem, which is first globalized (in space and time) and then fully globalized by introducing a new coordinate related to the different species involved in the kinetic model. Thanks to the non-incremental nature of both discrete descriptions (the local and the global one), the coupling is quite simple and no special difficulties are encountered by using heterogeneous time integration.

Francisco Chinesta
Ecole Centrale Nantes
Universite de Nantes, France
[email protected]

Amine Ammar
Arts et Metiers ParisTech, France
[email protected]

Elias Cueto
Institute of Engineering Research
Universidad de Zaragoza, Spain
[email protected]

Talk 3. Dynamic data-driven inverse identification in dynamical systems
Dynamic Data-Driven Application Systems (DDDAS) appear as a new paradigm in the field of applied sciences and engineering, and in particular in simulation-based engineering sciences. By DDDAS we mean a set of techniques that allow the linkage of simulation tools with measurement devices for real-time control of systems and processes. DDDAS entails the ability to dynamically incorporate additional data into an executing application, and in reverse, the ability of an application to dynamically steer the measurement process. DDDAS needs accurate and fast simulation tools, making use where possible of off-line computations to limit the on-line computations as much as possible. We could define efficient solvers by introducing all the sources of variability as extra-coordinates, in order to solve the model off-line only once and obtain its most general solution, to be then used for on-line purposes. However, such models are defined in highly multidimensional spaces and suffer the so-called curse of dimensionality. We recently proposed a technique, the Proper Generalized Decomposition (PGD), able to circumvent the redoubtable curse of dimensionality. The marriage of DDDAS concepts and tools and PGD "off-line" computations could open unimaginable possibilities in the field of dynamic data-driven application systems. In this work we explore some possibilities in the context of on-line inverse identification of dynamical systems that can be found in the control of industrial processes.

Elias Cueto
Institute of Engineering Research
Universidad de Zaragoza, Spain
[email protected]

Talk 4. Tensor approximation methods for parameter identification
In this talk we present methods for parameter identification, here the coefficient fields of partial differential equations, based on Bayesian procedures. As is well known, Bayes's theorem for random variables (RVs) of finite variance is an orthogonal projection onto a space of functions generated by the observations. Both the quantity to be identified and the space to project onto are described as functions of known RVs. All this takes place in high-dimensional tensor product spaces. To limit the numerical effort, we use low-rank tensor approximations for the forward as well as the inverse problem.

Hermann G. Matthies
Institute of Scientific Computing
TU Braunschweig
[email protected]

Alexander Litvinenko
Institute of Scientific Computing
TU Braunschweig
[email protected]

Bojana Rosic
Institute of Scientific Computing
TU Braunschweig
[email protected]

Oliver Pajonk
Institute of Scientific Computing
TU Braunschweig
[email protected]

MS 21. Reducing communication in linear algebra - Part I of II
Talk 1. Communication-optimal parallel algorithm for Strassen's matrix multiplication
Parallel matrix multiplication is one of the most studied fundamental problems in distributed and high performance computing. We obtain a new parallel algorithm that is based on Strassen's fast matrix multiplication and minimizes communication. The algorithm outperforms all other parallel matrix multiplication algorithms, classical and Strassen-based, both asymptotically and in practice. A critical bottleneck in parallelization of Strassen's algorithm is the communication between the processors. Ballard, Demmel, Holtz, and Schwartz (SPAA'11) provide lower bounds on these communication costs, using expansion properties of the underlying computation graph. Our algorithm matches these lower bounds, and so is communication-optimal. It has perfect strong scaling within the maximum possible range. Our parallelization approach generalizes to other fast matrix multiplication algorithms.

Oded Schwartz
Computer Science Division
UC Berkeley
[email protected]

Grey Ballard
Computer Science Division
UC Berkeley
[email protected]

James Demmel
Computer Science Division
UC Berkeley
[email protected]

Olga Holtz
Math Department
UC Berkeley and TU Berlin
[email protected]

Benjamin Lipshitz
Computer Science Division
UC Berkeley
[email protected]
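For reference, the recursion that the parallel algorithm above distributes is Strassen's seven-multiplication scheme. A minimal sequential sketch (sizes assumed to be powers of two; none of the communication-avoiding machinery is reproduced):

```python
import numpy as np

def strassen(A, B, cutoff=32):
    """Strassen's recursion: 7 multiplications per level instead of 8.

    A plain sequential sketch; matrices are assumed square with
    power-of-two size. Below `cutoff`, fall back to ordinary matmul.
    """
    n = A.shape[0]
    if n <= cutoff:
        return A @ B
    h = n // 2
    A11, A12, A21, A22 = A[:h, :h], A[:h, h:], A[h:, :h], A[h:, h:]
    B11, B12, B21, B22 = B[:h, :h], B[:h, h:], B[h:, :h], B[h:, h:]
    M1 = strassen(A11 + A22, B11 + B22, cutoff)
    M2 = strassen(A21 + A22, B11, cutoff)
    M3 = strassen(A11, B12 - B22, cutoff)
    M4 = strassen(A22, B21 - B11, cutoff)
    M5 = strassen(A11 + A12, B22, cutoff)
    M6 = strassen(A21 - A11, B11 + B12, cutoff)
    M7 = strassen(A12 - A22, B21 + B22, cutoff)
    C = np.empty((n, n))
    C[:h, :h] = M1 + M4 - M5 + M7
    C[:h, h:] = M3 + M5
    C[h:, :h] = M2 + M4
    C[h:, h:] = M1 - M2 + M3 + M6
    return C

rng = np.random.default_rng(0)
A = rng.standard_normal((128, 128))
B = rng.standard_normal((128, 128))
err = np.max(np.abs(strassen(A, B) - A @ B))
print(err)   # small: Strassen agrees with classical multiplication
```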

Talk 2. A communication-avoiding symmetric-indefinite factorization
We describe a communication-avoiding symmetric indefinite factorization. The factorization works in two phases. In the first phase, we factor A = LTL^T, where L is unit lower triangular and T is banded with bandwidth Θ(√M), where M is the size of the cache. This phase uses a block version of Aasen's algorithm. In the second phase, we factor T; this can be done efficiently using Kaufman's retraction algorithm, or LU with partial pivoting, or successive band reduction. We show that the algorithm is asymptotically optimal in terms of communication, is work efficient (n³/3 + o(n³) flops), and is backward stable.

Sivan Toledo
School of Computer Science
Tel-Aviv University
[email protected]

Grey Ballard
Computer Science Division
UC Berkeley
[email protected]

James Demmel
Computer Science Division
UC Berkeley
[email protected]

Alex Druinsky
School of Computer Science
Tel-Aviv University
[email protected]

Inon Peled
School of Computer Science
Tel-Aviv University
[email protected]

Oded Schwartz
Computer Science Division
UC Berkeley
[email protected]

Talk 3. LU factorisation with panel rank revealing pivoting and its communication avoiding version
We present the LU decomposition with panel rank revealing pivoting (LU PRRP), an LU factorization algorithm based on strong rank revealing QR panel factorization. LU PRRP is more stable than Gaussian elimination with partial pivoting (GEPP). Our extensive numerical experiments show that the new factorization scheme is as numerically stable as GEPP in practice, but it is more resistant to pathological cases and easily solves the Wilkinson matrix and the Foster matrix. We also present CALU PRRP, a communication avoiding version of LU PRRP that minimizes communication. CALU PRRP is more stable than CALU, the communication avoiding version of GEPP.

Amal Khabou
Laboratoire de Recherche en Informatique, Universite Paris-Sud 11, INRIA Saclay - Ile de France
[email protected]

James W. Demmel
Computer Science Division and Mathematics Department, UC Berkeley, CA 94720-1776, USA
[email protected]

Laura Grigori
INRIA Saclay - Ile de France, Laboratoire de Recherche en Informatique, Universite Paris-Sud 11, France
[email protected]

Ming Gu
Mathematics Department, UC Berkeley, CA 94720-1776, USA
[email protected]
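The Wilkinson matrix mentioned above is the classical example where GEPP suffers exponential element growth (a factor of 2^(n−1)), the pathology that rank revealing pivoting is designed to avoid. Below is a small hand-rolled demonstration of the GEPP growth factor (this sketches the failure mode, not the LU PRRP algorithm itself):

```python
import numpy as np

def gepp_growth(A):
    """Growth factor max|U| / max|A| for Gaussian elimination with
    partial pivoting (a plain teaching implementation, not LAPACK)."""
    U = A.astype(float)
    n = U.shape[0]
    g = np.abs(U).max()
    for k in range(n - 1):
        p = k + np.argmax(np.abs(U[k:, k]))   # partial pivoting: largest entry in column
        U[[k, p]] = U[[p, k]]
        mult = U[k + 1:, k] / U[k, k]
        U[k + 1:, k:] -= np.outer(mult, U[k, k:])
        g = max(g, np.abs(U).max())
    return g / np.abs(A).max()

n = 20
W = np.eye(n) - np.tril(np.ones((n, n)), -1)  # 1 on the diagonal, -1 below
W[:, -1] = 1.0                                # last column of ones: Wilkinson's example
growth = gepp_growth(W)
print(growth)                                 # 2^(n-1) = 524288.0 for n = 20
```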

Talk 4. 2.5D algorithms for parallel dense linear algebra
We give a '2.5D' parallelization scheme for dense linear algebra computations. Our scheme adaptively exploits any extra available memory. The scheme generalizes previous work on 2D and 3D algorithms for matrix multiplication. Further, we extend 2.5D algorithms to LU, Cholesky, and Cholesky-QR factorizations, triangular solve, and the all-pairs-shortest-path problem. These new algorithms perform asymptotically less communication than any previous counterparts. 2.5D algorithms are also practical and map well to torus networks. We benchmark 2.5D matrix multiplication and LU factorization on 65,536 cores of a Blue Gene/P supercomputer. Our results demonstrate that memory replication improves strong scalability and overall efficiency significantly.

Edgar Solomonik
Dept. of Computer Science
University of California, Berkeley
[email protected]

James Demmel
Dept. of Computer Science and Mathematics
University of California, Berkeley
[email protected]

MS 22. Linear algebra for inverse problems - Part I of II
Talk 1. Block-extrapolation methods for linear matrix ill-posed problems
In this talk, we consider large-scale linear discrete ill-posed problems where the right-hand side contains noise. In many applications, such as image restoration, the coefficient matrix is given as a sum of Kronecker products of matrices, and the Tikhonov regularization problem then leads to a generalized linear matrix equation. We define some block extrapolation methods and show how they can be used in combination with the singular value decomposition (SVD) to solve these problems. In particular, the proposed methods will be applied to image restoration. Some theoretical results and numerical tests in image restoration are also given.

Khalide Jbilou
LMPA, Universite du Littoral, 50, rue F. Buisson, BP 699, 62228 CALAIS-Cedex, France
[email protected]
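The talk's block extrapolation methods are not reproduced here, but the underlying building block is easy to sketch: Tikhonov regularization for a single Kronecker term, min ‖(A ⊗ B) vec(X) − vec(C)‖² + λ²‖vec(X)‖², solved with two small SVDs (a standard construction, assumed rather than taken from the talk):

```python
import numpy as np

def kron_tikhonov(A, B, C, lam):
    """Solve min ||B X A^T - C||_F^2 + lam^2 ||X||_F^2, i.e. Tikhonov
    regularization of (A kron B) vec(X) = vec(C), via two small SVDs."""
    UA, sA, VAt = np.linalg.svd(A, full_matrices=False)
    UB, sB, VBt = np.linalg.svd(B, full_matrices=False)
    Chat = UB.T @ C @ UA
    S = np.outer(sB, sA)                 # products sigma_B[i] * sigma_A[j]
    Y = S * Chat / (S**2 + lam**2)       # Tikhonov filter factors, elementwise
    return VBt.T @ Y @ VAt

rng = np.random.default_rng(1)
A = rng.standard_normal((6, 6))
B = rng.standard_normal((5, 5))
X_true = rng.standard_normal((5, 6))
C = B @ X_true @ A.T                     # exact (noise-free) data
X = kron_tikhonov(A, B, C, lam=1e-8)
print(np.max(np.abs(X - X_true)))        # essentially zero for tiny lam
```

The point of the SVD form is that the n²-dimensional Kronecker system never has to be assembled: only the small factors A and B are decomposed.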

Talk 2. Convergence properties of the GMRES and RRGMRES methods for ill-posed problems
Range restricted iterative methods (RRGMRES) based on the Arnoldi process are attractive for the solution of large nonsymmetric linear discrete ill-posed problems with error-contaminated data (right-hand side). GMRES is one of the most widely used iterative methods for the solution of linear systems of equations with a large real or complex nonsingular matrix. Convergence properties of GMRES have been discussed by many authors. We present some theoretical results for GMRES and RRGMRES and we give some numerical tests.

Hassane Sadok
LMPA, Universite du Littoral, 50, rue F. Buisson, BP 699, 62228 CALAIS-Cedex, France
[email protected]

Lothar Reichel
Department of Mathematical Sciences
Kent State University
[email protected]

Talk 3. Inverse problems for regularization matrices
Discrete ill-posed problems are difficult to solve, because their solution is very sensitive to errors in the data and to round-off errors introduced during the solution process. Tikhonov regularization replaces the given discrete ill-posed problem by a nearby penalized least-squares problem whose solution is less sensitive to perturbations. The penalization term is defined by a regularization matrix, whose choice may affect the quality of the computed solution significantly. We describe several inverse matrix problems whose solution yields regularization matrices adapted to the desired solution. Numerical examples illustrate the performance of the regularization matrices determined.

Silvia Noschese
Dipartimento di Matematica "Guido Castelnuovo"
SAPIENZA Universita di Roma
[email protected]

Lothar Reichel
Department of Mathematical Sciences
Kent State University
[email protected]

Talk 4. Meshless regularization for the numerical computation of the solution of steady Burgers-type equations
In this talk, we introduce a meshless approximation method for the numerical solution of steady Burgers-type equations. The numerical approximation to the exact solution is obtained from the values of the analytical solution on a finite set of scattered data points in the interior of the domain. These values are in fact unknown and contaminated by noise, so regularization techniques, such as smoothing thin plate splines, are needed to obtain the numerical approximation. This strategy leads to a nonlinear system which may be solved by using the Tikhonov regularization method. The estimation of the regularization parameter is obtained by using the classical GCV or L-curve criteria. We present some theoretical results and we give some numerical tests.

Abderrahman Bouhamidi
LMPA, Universite du Littoral, 50, rue F. Buisson, BP 699, 62228 CALAIS-Cedex, France
[email protected]

Mustapha Hached
LMPA, Universite du Littoral, 50, rue F. Buisson, BP 699, 62228 CALAIS-Cedex, France
[email protected]

Khalide Jbilou
LMPA, Universite du Littoral, 50, rue F. Buisson, BP 699, 62228 CALAIS-Cedex, France
[email protected]

MS 23. Modern matrix methods for large scale data and networks
Talk 1. Nonlinear eigenproblems in data analysis and graph partitioning
It turns out that many problems in data analysis and beyond have natural formulations as nonlinear eigenproblems. In this talk I present our recent line of research in this area. This includes the efficient computation of nonlinear eigenvectors via a generalization of the inverse power method, and a general result showing that a certain class of NP-hard combinatorial problems, such as balanced graph cuts or the (constrained) maximum density subgraph, have tight relaxations as nonlinear eigenproblems.

Matthias Hein
Faculty of Mathematics and Computer Science
Saarland University
[email protected]

Talk 2. LSRN: a parallel iterative solver for strongly over- or under-determined systems
We develop a parallel iterative least squares solver named LSRN that is based on random normal projection. It computes the unique minimum-length solution to min_{x∈R^n} ‖Ax − b‖_2, where A ∈ R^{m×n} can be rank-deficient with m ≫ n or m ≪ n. A can be a dense matrix, a sparse matrix, or a linear operator, and LSRN speeds up automatically on sparse matrices and fast operators. The preconditioning phase consists of a random normal projection, which is embarrassingly parallel, and a singular value decomposition of size ⌈γ min(m,n)⌉ × min(m,n), where γ is moderately larger than 1, e.g., γ = 2. We show that the preconditioned system is well-conditioned, with a strong concentration result on the extreme singular values. Hence the number of iterations is fully predictable if we apply LSQR or the Chebyshev semi-iterative method to the preconditioned system. The latter method is particularly efficient for solving large-scale problems on clusters with high communication cost. LSRN is also capable of handling certain types of Tikhonov regularization. Numerical results show that LSRN outperforms LAPACK's DGELSD on large-scale dense problems and MATLAB's backslash (SuiteSparseQR) on sparse problems on a shared memory machine, and LSRN scales well on Amazon Elastic Compute Cloud (EC2) clusters.

Xiangrui Meng
Institute for Computational and Mathematical Engineering
Stanford University
[email protected]

M. A. Saunders
Institute for Computational and Mathematical Engineering
Stanford University
[email protected]

M. W. Mahoney
Stanford University
[email protected]
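The preconditioning phase described above can be sketched in a few lines. This is a toy serial version: the sketch size ⌈γn⌉, the SVD-based right preconditioner N = VΣ⁻¹, and the conditioning claim follow the abstract, while the test matrix is an arbitrary ill-conditioned example:

```python
import numpy as np

rng = np.random.default_rng(2)
m, n, gamma = 2000, 50, 2.0
# An arbitrary badly conditioned tall matrix (condition number ~1e6).
A = rng.standard_normal((m, n)) @ np.diag(np.logspace(0, 6, n))

s = int(np.ceil(gamma * n))            # sketch size ceil(gamma * n)
G = rng.standard_normal((s, m))        # random normal projection
U, sig, Vt = np.linalg.svd(G @ A, full_matrices=False)

N = Vt.T @ np.diag(1.0 / sig)          # right preconditioner: iterate on min ||(A N) y - b||
kappa_A = np.linalg.cond(A)
kappa_AN = np.linalg.cond(A @ N)
print(kappa_A, kappa_AN)               # cond(A N) concentrates near 1
```

With the preconditioned matrix A N well-conditioned, the iteration count of LSQR or Chebyshev on the transformed problem is predictable, which is the property the abstract exploits.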

Talk 3. Solving large dense linear systems with covariance matrices
Gaussian processes are a fundamental tool in spatial/temporal statistics, and they have broad applications in fields such as nuclear engineering and climate science. The maximum likelihood approach for fitting a Gaussian model requires the manipulation of the covariance matrix through inversion, logarithmic action and trace computation, all of which pose significant challenges for large dense matrices. We consider a reformulation of the maximum likelihood through a stochastic approximation framework, which narrows down the several linear algebra challenges to solving a linear system with the covariance matrix for multiple right-hand sides. In this talk I will present an iterative technique integrating the designs of the solver, the preconditioner and the matrix-vector multiplications, which leads to a scalable method for solving the maximum likelihood problem in data analysis.

Jie Chen
Mathematics and Computer Science Division
Argonne National Laboratory
[email protected]
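A standard ingredient of such stochastic approximation frameworks is a randomized trace estimator; the sketch below uses Hutchinson's estimator (an assumption for illustration, the talk does not name the estimator it uses) on a synthetic covariance-like matrix:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 200
# A synthetic symmetric positive definite stand-in for a covariance matrix.
Q = rng.standard_normal((n, n))
A = Q @ Q.T / n + np.eye(n)

# Hutchinson estimator: tr(A) ~ mean of z^T A z over random sign vectors z.
k = 2000
Z = rng.choice([-1.0, 1.0], size=(n, k))
est = np.einsum('ik,ik->k', Z, A @ Z).mean()

exact = np.trace(A)
print(exact, est)   # the estimate is close to the exact trace
```

In the maximum likelihood setting the same idea is applied to quantities like tr(A⁻¹ dA), which is why everything reduces to solving systems with the covariance matrix for many right-hand sides.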

Talk 4. Fast coordinate descent methods with variable selection for non-negative matrix factorization
Non-negative Matrix Factorization (NMF) is a dimension reduction method for non-negative matrices, and has proven to be useful in many areas, such as text mining, bioinformatics and image processing. In this talk, I will present a new coordinate descent algorithm for solving the least squares NMF problem. The algorithm uses a variable selection scheme that employs the gradient of the objective function. This new method is considerably faster in practice, especially when the solution is sparse, as is often the case in real applications. In these cases, our method benefits by selecting important variables to update more often, thus resulting in higher speed. As an example, on the text dataset RCV1, our method is 7 times faster than FastHals, which is a cyclic coordinate descent algorithm, and more than 15 times faster when the sparsity is increased by adding an L1 penalty. We also develop new coordinate descent methods for NMF when the error is measured by the KL-divergence, by applying the Newton method to solve the one-variable sub-problems. Experiments indicate that our algorithm for minimizing the KL-divergence is faster than the Lee & Seung multiplicative rule by a factor of 10 on the CBCL image dataset.

Inderjit S. Dhillon
Department of Computer Science
The University of Texas at Austin
[email protected]

Cho-Jui Hsieh
Department of Computer Science
The University of Texas at Austin
[email protected]
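The cyclic coordinate descent baseline mentioned above (FastHals-style column updates) can be sketched as follows; the talk's gradient-based variable selection is deliberately omitted, so this is only the plain cyclic variant:

```python
import numpy as np

def nmf_hals(V, r, iters=100, seed=0):
    """Cyclic coordinate descent (HALS-style column updates) for
    least-squares NMF: min ||V - W H||_F^2 with W, H >= 0.
    A plain sketch of the FastHals baseline, not the talk's method."""
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.random((m, r)) + 0.1
    H = rng.random((r, n)) + 0.1
    for _ in range(iters):
        # Update each column of W exactly, with H fixed.
        HHt, VHt = H @ H.T, V @ H.T
        for a in range(r):
            W[:, a] = np.maximum(1e-12,
                                 W[:, a] + (VHt[:, a] - W @ HHt[:, a]) / HHt[a, a])
        # Update each row of H exactly, with W fixed (same update, transposed).
        WtW, WtV = W.T @ W, W.T @ V
        for a in range(r):
            H[a, :] = np.maximum(1e-12,
                                 H[a, :] + (WtV[a, :] - WtW[a, :] @ H) / WtW[a, a])
    return W, H

rng = np.random.default_rng(4)
V = rng.random((30, 20))
W, H = nmf_hals(V, r=5)
res = np.linalg.norm(V - W @ H) / np.linalg.norm(V)
print(res)   # small relative residual; both factors stay non-negative
```

Each column update is an exact one-block minimization with a closed form, which is what makes selecting the "important" columns to update more often (the talk's contribution) pay off.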

MS 24. Novel and synergetic algorithms for multicore and multinode architecture
Talk 1. Novel and synergetic linear algebra algorithms on multicore and multinode architecture
The advent of parallel computers, from multiple CPUs to, more recently, multiple processor cores within a single CPU, has continued to spur innovative linear algebra algorithms. This minisymposium aims to present a number of works that not only exhibit new algorithmic ideas suitable for these modern computer architectures, but also present interesting synergies among themselves on several levels. This talk gives an overview of how a set of such algorithms work together and sets the stage for the details to follow in each of the subsequent presentations.

Ping Tak Peter Tang
Intel Corporation
[email protected]

Talk 2. Towards hybrid factorization methods for solving large sparse systems
The availability of large-scale computing platforms comprised of thousands of multicore processors motivates the need for highly scalable sparse linear system solvers. In this talk I will review some recent work by researchers at Purdue University, the University of Lugano, and Intel Corporation on the combined use of direct and iterative methods for solving very large linear systems of equations. We will present recent progress in the area of parallel direct solvers and present a new parallel solver that combines the desirable characteristics of direct methods (robustness) and iterative solvers (computational efficiency), while alleviating their drawbacks (high memory requirements, lack of robustness).

Olaf Schenk
Institute of Computational Science
University of Lugano
[email protected]

Ahmed Sameh
Dept. of Computer Science
Purdue University
[email protected]

Talk 3. Eigensolver-based reordering and parallel TraceMIN
The eigenvector corresponding to the second smallest eigenvalue of the Laplacian of a graph, known as the Fiedler vector, has a number of applications in areas that include matrix reordering, graph partitioning, protein analysis, data mining, machine learning, and web search. The computation of the Fiedler vector has been regarded as an expensive process, as it involves solving a large eigenvalue problem. We present an efficient parallel algorithm for computing the Fiedler vector of large graphs based on the Trace Minimization algorithm. We compare the parallel performance of our method against one of the best implementations of a sequential multilevel scheme designed specifically for computing the Fiedler vector: routine MC73 of the Harwell Subroutine Library (HSL). In addition, we present results showing how such reordering (i) enhances data locality for sparse matrix-vector multiplication on parallel architectures, and (ii) helps in extracting effective and scalable preconditioners.

Murat Manguoglu
Department of Computer Engineering
Middle East Technical University, 06800 Ankara, Turkey
[email protected]

Faisal Saied
Department of Computer Science
Purdue University, West Lafayette, 47906 IN, USA
[email protected]

Ahmed H. Sameh
Department of Computer Science
Purdue University, West Lafayette, 47906 IN, USA
[email protected]

Talk 4. FEAST: a density matrix based eigensolver
The FEAST algorithm takes its inspiration from the density matrix representation and contour integration technique in quantum mechanics. It combines simplicity and efficiency and offers many important capabilities for achieving high performance, robustness, accuracy, and scalability for solving large sparse eigenvalue problems on parallel architectures. Starting from a random set of vectors, FEAST's main computational tasks consist of solving a few complex-valued independent linear systems with multiple right-hand sides, and one reduced eigenvalue problem orders of magnitude smaller than the original one. Accuracy can be systematically improved through an iterative process that usually converges in two to three iterations. In this talk, FEAST will first be presented as an outer-layer interface to any linear system solver, either shared-memory or distributed, such as PSPIKE. In addition, it will be shown that a single FEAST iteration provides the best suitable eigen-subspace, which can be used for initiating the TraceMIN algorithm.
Eric Polizzi
Department of Electrical and Computer Engineering
University of Massachusetts, Amherst
[email protected]
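As a rough illustration of the density-matrix/contour-integration idea, the following dense sketch (our own simplification, not the FEAST library; all names are ours) filters a random block through a quadrature of the resolvent and solves the small reduced eigenproblem. The subspace size `m0` is set equal to the number of wanted eigenvalues here; FEAST proper takes it larger and discards spurious Ritz values by residual tests, which we omit.

```python
import numpy as np

def contour_filter_eigensolver(A, emin, emax, m0, nodes=8, iters=4):
    """FEAST-style sketch for symmetric A: filter a block of vectors by a
    midpoint-rule quadrature of the spectral projector
    P = (1/(2*pi*i)) * oint (zI - A)^(-1) dz over a circle enclosing
    [emin, emax], then solve the small Rayleigh-Ritz problem; iterate."""
    n = A.shape[0]
    Y = np.random.default_rng(0).standard_normal((n, m0))
    c, r = (emin + emax) / 2.0, (emax - emin) / 2.0
    thetas = np.pi * (np.arange(nodes) + 0.5) / nodes   # upper half-circle
    for _ in range(iters):
        Q = np.zeros((n, m0))
        for t in thetas:
            z = c + r * np.exp(1j * t)
            # one complex linear solve with m0 right-hand sides per node
            S = np.linalg.solve(z * np.eye(n) - A, Y)
            # for real symmetric A the lower half-circle adds the conjugate
            Q += (r / nodes) * np.real(np.exp(1j * t) * S)
        Qo, _ = np.linalg.qr(Q)                  # orthonormal filtered basis
        w, V = np.linalg.eigh(Qo.T @ A @ Qo)     # reduced eigenproblem
        Y = Qo @ V
    keep = (w > emin) & (w < emax)
    return w[keep], Y[:, keep]

# eigenvalues of diag(1..10) inside (2.5, 5.5): expect 3, 4, 5
evals, evecs = contour_filter_eigensolver(np.diag(np.arange(1.0, 11.0)),
                                          2.5, 5.5, m0=3)
```

Each quadrature node costs one independent linear solve with `m0` right-hand sides, which is exactly the coarse-grained parallelism the abstract describes.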

MS 25. Direction preserving and filtering methods for solving sparse linear systems
Talk 1. Algebraic two-level domain decomposition methods
We consider the solution of linear systems arising from porous media flow simulations with high heterogeneities. The parallel solver is a Schwarz domain decomposition method. The unknowns are partitioned with a criterion based on the entries of the input matrix. This leads to substantial gains compared to a partition based only on the adjacency graph of the matrix. From the information generated during the solution of the first linear system, it is possible to build a coarse space for a two-level domain decomposition algorithm. We compare two coarse spaces: a classical approach and a new one adapted to parallel implementation.
Frederic Nataf
Laboratory J.L. Lions
University of Paris 6
[email protected]

Pascal Have
IFP
Rueil-Malmaison
[email protected]

Roland Masson
Laboratoire J.A. Dieudonne
University of Nice
[email protected]

Mikolaj [email protected]

Tao Zhao
Laboratory J.L. Lions
University of Paris 6
[email protected]

Talk 2. Filtering solvers
In this talk, we give an overview of filtering solvers. This comprises frequency filtering and the smoothing correction scheme, the tangential frequency filtering decomposition (TFFD) by Wagner, and the two-frequency decomposition by Buzdin et al. We then discuss the adaptive filtering method. The adaptive test vector iterative method allows the combination of the tangential frequency decomposition and other iterative methods such as multigrid. We further discuss the Filtering Algebraic Multigrid (FAMG) method. Interface problems as well as problems with stochastically distributed properties are considered. Realistic numerical experiments confirm the efficiency of the presented algorithms.
Gabriel Wittum
G-CSC, Frankfurt
[email protected]

Talk 3. Bootstrap algebraic multigrid


31 2012 SIAM Conference on Applied Linear Algebra

At the time of its development, Algebraic Multigrid (AMG) was thought of as a black-box solver for systems of linear equations. Yet the classical formulation of AMG turned out to lack the robustness to overcome challenges encountered in many of today's computational simulations, which led to the development of adaptive techniques in AMG methods.
We present in this talk a Bootstrap approach to adaptive AMG, introducing the so-called “Least Squares Interpolation” (LSI), which allows one to define interpolation operators solely based on prototypes of algebraically smooth error. Furthermore, we introduce a “Bootstrap Setup” which enables us to compute accurate LSI operators, using a multigrid approach in the setup to efficiently compute the required prototypes of algebraically smooth error.
We demonstrate the potential of the Bootstrap AMG approach in its application to a variety of problems, each illustrating a certain aspect of the method.
Karsten Kahl
Dept. of Mathematics and Natural Sciences
University of Wuppertal
[email protected]

James Brannick
Dept. of Mathematics
Pennsylvania State University
[email protected]

Talk 4. Block filtering decomposition
We describe a preconditioning technique that is suitable for matrices arising from the discretization of a system of PDEs on unstructured grids. The preconditioner satisfies a filtering property, which ensures that the input matrix is identical to the preconditioner on a given filtering vector. This vector is chosen to alleviate the effect of low frequency modes on convergence. We present a general approach that allows us to ensure that the filtering condition is satisfied in a matrix decomposition. The input matrix can have an arbitrary sparse structure, and can be reordered using nested dissection to allow a parallel implementation.
Laura Grigori
INRIA Saclay
University of Paris 11
[email protected]

Frederic Nataf
Dept. of Applied Mathematics
University of Paris 6
[email protected]

Riadh Fezzanni
INRIA Saclay
University of Paris 11
[email protected]
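A one-vector toy version of the filtering condition (our own construction, much cruder than the talk's block decomposition): sparsify A and lump what was dropped onto the diagonal, so that M t = A t holds exactly for the chosen filtering vector t, here the constant (low-frequency) mode.

```python
import numpy as np

def tridiagonal_filter_preconditioner(A, t):
    """Toy illustration of a filtering property M t = A t: keep only the
    tridiagonal part of A, then compensate each row on the diagonal so
    that the action on the filtering vector t is reproduced exactly
    (cf. modified-ILU-style diagonal lumping; not the talk's method)."""
    n = A.shape[0]
    M = np.zeros_like(A, dtype=float)
    for i in range(n):
        lo, hi = max(0, i - 1), min(n, i + 2)
        M[i, lo:hi] = A[i, lo:hi]          # keep the tridiagonal part
        dropped = A[i] @ t - M[i] @ t      # action on t lost by sparsification
        M[i, i] += dropped / t[i]          # lump it back onto the diagonal
    return M

rng = np.random.default_rng(2)
A = rng.standard_normal((8, 8)) + 8 * np.eye(8)
t = np.ones(8)                             # filter the constant mode
M = tridiagonal_filter_preconditioner(A, t)
```

By construction `M @ t` equals `A @ t` exactly even though M is a strict sparsification of A.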

MS 26. Advances in Krylov subspace methods
Talk 1. The new challenges to Krylov subspace methods
Solution methods based on preconditioned Krylov subspace methods have reached a certain level of maturity, and research in this area is now mostly concerned with issues related to preconditioning. This state of equilibrium is currently being challenged by the emerging computational architectures. For example, general-purpose GPUs put a high cost on the inner products on which Krylov methods rely heavily. At the same time, the many-core environment will make standard Krylov methods unreliable due to the potentially high number of possible hardware failures when tens or hundreds of thousands of cores are deployed. In this talk we will explore these challenges and see what options are available to face them.
Yousef Saad
Dept. of Computer Science and Engineering
University of Minnesota, Twin Cities
[email protected]

Talk 2. Random shadow vectors in IDR(s): an explanation of its GMRES-like convergence
The IDR(s) method [Sonneveld and van Gijzen, SISC 31, 1035-1062 (2008)] is a family of short-recurrence Krylov subspace solvers for large sparse, not necessarily symmetric linear systems. For increasing s, the convergence behaviour shows an increasing similarity with full GMRES. The residuals of IDR(s) are, roughly speaking, orthogonalized with respect to s shadow vectors. Usually these are randomly chosen. Each iteration step can be related to a Galerkin approximation in a corresponding Krylov subspace. In this talk we describe why the quality of this Galerkin approximation comes close to the optimal one (GMRES) if s is large enough.
Peter Sonneveld
Delft Institute of Applied Mathematics
Delft University of Technology
[email protected]

Talk 3. Truncated and inexact Krylov subspace methods for parabolic control problems
We study the use of inexact and truncated Krylov subspace methods for the solution of linear systems arising in the discretized solution of optimal control of a parabolic partial differential equation. An all-at-once temporal discretization and a reduced Hessian approach are used. The solutions of the two linear systems involved in this reduced Hessian can be approximated, and in fact they can be less and less exact as the iterations progress. The option we propose is the use of the parareal-in-time algorithm for approximating the solution of these two linear systems. Truncated methods can be used without much delay in convergence, but with important savings in storage. Spectral bounds are provided and numerical experiments are presented, illustrating the potential of the proposed methods.
Daniel B. Szyld
Dept. of Mathematics
Temple University
[email protected]

Xiuhong Du
Dept. of Mathematics
Alfred University
[email protected]

Marcus Sarkis
Dept. of Mathematics
Worcester Polytechnic Institute
[email protected]

Christian E. Schaerer
Dept. of Computer Science
National University of Asuncion
[email protected]

Talk 4. Convergence of iterative solution algorithms for least-squares problems
We consider the iterative solution of linear least-squares problems. We ask the following questions: which quantities should be used to monitor convergence, and how can these quantities be estimated at every iteration of commonly used algorithms? We argue that the backward error, appropriately scaled, is an excellent measure of convergence. We show how certain projections of the residual vector can be used to bound or estimate it. We present theoretical results and numerical experiments on the convergence of the backward error and its estimates in the algorithms LSQR of Paige and Saunders and LSMR of Fong and Saunders.
David Titley-Peloquin
Mathematical Institute
University of Oxford
[email protected]

Serge Gratton
ENSEEIHT
[email protected]

Pavel Jiranek
CERFACS
[email protected]
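A simple stand-in for such convergence monitoring (hypothetical example; the talk's scaled backward error is a sharper quantity): at a least-squares solution the residual r = b - A x satisfies A^T r = 0, so the scaled normal-equations residual below vanishes exactly at optimality and is large for an unconverged iterate of a solver such as LSQR.

```python
import numpy as np

def optimality(A, b, x):
    """Scaled normal-equations residual ||A^T r|| / (||A|| ||r||) with
    r = b - A x: a cheap optimality measure for least-squares iterates
    (a stand-in for the talk's scaled backward error)."""
    r = b - A @ x
    return np.linalg.norm(A.T @ r) / (np.linalg.norm(A, 2) * np.linalg.norm(r))

rng = np.random.default_rng(3)
A = rng.standard_normal((60, 12))
b = rng.standard_normal(60)

x_ls = np.linalg.lstsq(A, b, rcond=None)[0]   # a "converged" iterate
x_bad = np.zeros(12)                           # a far-from-converged iterate
```

`optimality(A, b, x_ls)` is at roundoff level while `optimality(A, b, x_bad)` is O(1), which is the kind of gap a stopping test exploits.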

MS 27. Preconditioning for PDE-constrained optimization - Part II of II
Talk 1. On linear systems arising in trust-region methods
Trust-region methods are widely used for the numerical solution of nonlinear constrained and unconstrained optimization problems. Like line-search methods, they proceed by successively minimizing local models of the objective. In contrast to line-search methods, however, trust-region approaches do not require nor promote the positive definiteness of the local Hessian approximations. In this presentation, we address issues arising in the context of the preconditioned approximate solution of these local models. The talk will highlight connections between algorithmic optimization and linear algebra.
Susann Mach
Dept. of Mathematics
University of Technology Chemnitz
[email protected]

Roland Herzog
Dept. of Mathematics
University of Technology Chemnitz
[email protected]

Talk 2. Preconditioning for PDE-constrained optimization using proper orthogonal decomposition
The main effort of solving a PDE-constrained optimization problem is devoted to solving the corresponding large-scale linear system, which is usually sparse and ill conditioned. As a result, a suitable Krylov subspace solver is favourable, if a proper preconditioner is embedded. Other than the commonly used block preconditioners, we exploit knowledge of proper orthogonal decomposition (POD) for preconditioning and achieve some interesting features. Numerical results on nonlinear test problems are presented.
Ekkehard Sachs
Department of Mathematics
University of Trier, 54286 Trier
[email protected]

Xuancan Ye
Department of Mathematics
University of Trier, 54286 Trier
[email protected]

Talk 3. Preconditioning for Allen-Cahn variational inequalities with non-local constraints
The solution of Allen-Cahn variational inequalities with mass constraints is of interest in many applications. This problem can be solved in its scalar and vector-valued form as a PDE-constrained optimization problem. At the heart of this method lies the solution of linear systems in saddle point form. In this talk we propose the use of Krylov subspace solvers and suitable preconditioners for the saddle point systems. Numerical results illustrate the competitiveness of this approach.
Luise Blank
Dept. of Mathematics
University of Regensburg
[email protected]

Martin Stoll
Max Planck Institute for Dynamics of Complex Technical Systems
[email protected]

Lavinia Sarbu

Talk 4. A one-shot approach to time-dependent PDE control
In this talk, we motivate, derive and test effective preconditioners to be used with the MINRES algorithm for solving a number of saddle point systems, which arise in PDE-constrained optimization problems. We consider the distributed control and boundary control problems subject to partial differential equations such as the heat equation or Stokes equations. Crucial to the effectiveness of our preconditioners in each case is an effective approximation of the Schur complement of the matrix system. In each case, we state the problem being solved, propose the preconditioning approach, and provide numerical results which demonstrate that our solvers are effective for a wide range of regularization parameter values, as well as mesh sizes and time-steps.
Martin Stoll
Computational Methods in Systems and Control Theory
Max Planck Institute for Dynamics of Complex Technical Systems
Sandtorstr. 1, 39106 Magdeburg
[email protected]

Andy Wathen
Numerical Analysis Group
Mathematical Institute, 24–29 St Giles’, Oxford, OX1 3LB
[email protected]

John Pearson
Numerical Analysis Group
Mathematical Institute, 24–29 St Giles’, Oxford, OX1 3LB
[email protected]

MS 28. Matrices and graphs - Part II of II
Talk 1. Parameters related to maximum nullity, zero forcing number, and tree-width of a graph
Tree-width, and variants that restrict the allowable tree decompositions, play an important role in the study of graph algorithms and have application to computer science. The zero forcing number is used to study the maximum nullity/minimum rank of the family of symmetric matrices described by a graph. Relationships between these parameters and several Colin de Verdiere type parameters, and numerous variations (including the minor monotone floors and ceilings of some of these parameters), are discussed.
Leslie Hogben
Department of Mathematics
Iowa State University
[email protected]



Francesco Barioli
Department of Mathematics
University of Tennessee at Chattanooga
[email protected]

Wayne Barrett
Department of Mathematics
Brigham Young University
[email protected]

Shaun M. Fallat
Department of Mathematics and Statistics
University of Regina
[email protected]

H. Tracy Hall
Department of Mathematics
Brigham Young University
[email protected]

Bryan Shader
Department of Mathematics
University of Wyoming
[email protected]

P. van den Driessche
Department of Mathematics and Statistics
University of Victoria
[email protected]

Hein van der Holst
Department of Mathematics and Statistics
Georgia State University
[email protected]

Talk 2. Colin de Verdiere numbers of chordal and split graphs

I will discuss the Colin de Verdiere number of a chordal graph, showing that it always takes one of two possible values. For the important subcase of split graphs, I will present a complete distinction between the two cases. Finally, I will show how to deduce from the result on chordal graphs an improvement by a factor of 2 to Colin de Verdiere's well-known tree-width upper bound for the eponymous number. This work was done while the author was at the Technion.
Felix Goldberg
Hamilton Institute
National University of Ireland Maynooth
[email protected]

Abraham Berman
Department of Mathematics, Technion
[email protected]

Talk 3. Kochen-Specker sets and the rank-1 quantum chromatic number
The quantum chromatic number of a graph G is sandwiched between its chromatic number and its clique number, which are well-known NP-hard quantities. We restrict our attention to the rank-1 quantum chromatic number χ_q^(1)(G), which upper bounds the quantum chromatic number, but is defined under stronger constraints. We study its relation with the chromatic number χ(G) and the minimum dimension of orthogonal representations ξ(G). It is known that ξ(G) ≤ χ_q^(1)(G) ≤ χ(G). We answer three open questions about these relations: we give a necessary and sufficient condition to have ξ(G) = χ_q^(1)(G), we exhibit a class of graphs such that ξ(G) < χ_q^(1)(G), and we give a necessary and sufficient condition to have χ_q^(1)(G) < χ(G). Our main tools are Kochen-Specker sets, collections of vectors with a traditionally important role in the study of noncontextuality of physical theories, and more recently in the quantification of quantum zero-error capacities. Finally, as a corollary of our results and a result by Avis, Hasegawa, Kikuchi, and Sasaki on the quantum chromatic number, we give a family of Kochen-Specker sets of growing dimension.
Simone Severini
Department of Computer Science and Department of Physics & Astronomy
University College London
[email protected]

Giannicola Scarpa
Centrum Wiskunde & Informatica (CWI)
[email protected]

Talk 4. On the null vectors of graphs
Eigenvalues and eigenvectors associated with graphs have been important areas of research for many years. My plan for this talk is to revisit a basic, yet important, result by Fiedler on the inertia of acyclic matrices, discuss its impact on some previous related work, and highlight some interesting new implications for certain null vectors associated with a graph.
Shaun M. Fallat
Department of Mathematics and Statistics
University of Regina
[email protected]

MS 29. Tensor based methods for high dimensional problems in scientific computing - Part II of II
Talk 1. Sparse tensor approximation for uncertainty quantification
Uncertainty quantification (UQ) in complex models is typically challenged by the high-dimensionality of the input space. This is true of both forward and inverse UQ problems, and particularly so when the computational cost of a forward model evaluation is high. Despite the availability of efficient sparse-quadrature methods, it is frequently necessary to pursue sparse, low dimensional representations of uncertain model outputs, retaining only important dependencies on the input space, in order to render the UQ problem tractable. We discuss the use of Bayesian Compressed Sensing methods for discovering sparse polynomial chaos representations in this context.
Habib Najm
Combustion Research Facility, Sandia National Laboratories
[email protected]

Talk 2. Analysis of greedy algorithms for high-dimensional problems
We study a class of greedy algorithms for the solution of high dimensional partial differential equations. The idea is to represent the solution as a sum of tensor products and to compute iteratively the terms of this sum. Convergence results of this approach will be presented for the case of convex optimization problems. We will also analyze algorithms for the solution of nonsymmetric problems.
Virginie Erlacher
CERMICS - ENPC
[email protected]

Tony Lelievre
CERMICS - ENPC
[email protected]
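A minimal instance of the sum-of-tensor-products idea (our own sketch under simplifying assumptions, not the algorithms analyzed in the talk): for the tensor-structured system A X + X B = F with A, B symmetric positive definite, each rank-one term u v^T is computed by alternating minimization of the energy functional J(X) = ½⟨AX + XB, X⟩ - ⟨F, X⟩.

```python
import numpy as np

def greedy_rank_one_solve(A, B, F, terms=10, sweeps=30):
    """Greedy / PGD-style sketch for A X + X B = F with A, B symmetric
    positive definite: build X as a sum of rank-one terms u v^T, each
    found by alternating minimization of the energy functional
    (illustrative only; parameters and names are our own)."""
    n, m = F.shape
    X = np.zeros((n, m))
    rng = np.random.default_rng(0)
    for _ in range(terms):
        R = F - (A @ X + X @ B)                    # current residual
        v = rng.standard_normal(m)
        for _ in range(sweeps):
            # optimality conditions over u (v fixed) and over v (u fixed)
            u = np.linalg.solve((v @ v) * A + (v @ B @ v) * np.eye(n), R @ v)
            v = np.linalg.solve((u @ u) * B + (u @ A @ u) * np.eye(m), R.T @ u)
        X += np.outer(u, v)
    return X

def laplacian_1d(n):
    """1D Laplacian; A X + X B is then a discrete 2D Poisson operator."""
    return 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)

A, B = laplacian_1d(8), laplacian_1d(8)
F = np.random.default_rng(1).standard_normal((8, 8))
X = greedy_rank_one_solve(A, B, F)
rel_res = np.linalg.norm(F - (A @ X + X @ B)) / np.linalg.norm(F)
```

Each greedy step strictly decreases the energy, which is the convex-optimization setting for which the talk presents convergence results.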

Talk 3. Projected dynamical systems in tensor Banach spaces
Some classes of projected dynamical systems in reflexive, strictly convex and smooth Banach spaces were introduced by Guiffre, Idone and Pia in J. Glob. Optim. (2008) 40:119-128. The main drawback in extending this concept to the framework of tensor spaces is the use of closed convex subsets in metric projections. In this talk we discuss the definition of a projected dynamical system on a tensor Banach space using a path of idempotent operators.
Antonio Falco
Dept. de Economía Financiera y Contabilidad
Universidad de Alicante
[email protected]

Talk 4. Algorithms for approximate inverse of operators for preconditioning systems of equations in tensor format
We propose and analyze greedy algorithms for the approximation of the inverse of an operator in tensor format. The algorithms are based on successive best approximations with respect to non-usual norms, which makes possible the decomposition of the inverse operator without any a priori information. This approximate inverse is then used for preconditioning iterative solvers and PGD algorithms for the solution of high dimensional PDEs in tensor format. The efficiency of the proposed preconditioner is illustrated on numerical examples, where it is compared to other preconditioners.
Loic Giraldi
LUNAM Universite, GeM, UMR CNRS 6183
Ecole Centrale de Nantes, Universite de Nantes
[email protected]

Anthony Nouy
LUNAM Universite, GeM, UMR CNRS 6183
Ecole Centrale de Nantes, Universite de Nantes
[email protected]

Gregory Legrain
LUNAM Universite, GeM, UMR CNRS 6183
Ecole Centrale de Nantes, Universite de Nantes
[email protected]

MS 30. Reducing communication in linear algebra - Part II of II
Talk 1. Communication-avoiding sparse matrix-matrix multiplication
Sparse matrix-matrix multiplication is a key primitive for many high performance graph algorithms as well as some linear solvers, such as algebraic multigrid. There is a significant gap between the communication costs of existing algorithms and known lower bounds. We present new 1D and 2.5D algorithms that perform less communication than existing algorithms. These algorithms have different memory requirements, and they scale differently with increasing matrix density and processor count. We also report on our performance results on large-scale experiments and our recent progress in obtaining tighter lower bounds.
Aydın Buluc
Computational Research Division
Lawrence Berkeley National Laboratory
[email protected]

Grey Ballard
Computer Science Division
UC Berkeley
[email protected]

James Demmel
Computer Science Division
UC Berkeley
[email protected]

Laura Grigori

INRIA Saclay - Ile de France, Laboratoire de Recherche en Informatique
Universite Paris-Sud 11
[email protected]

Oded Schwartz
Computer Science Division
UC Berkeley
[email protected]

Talk 2. Improving the stability of communication-avoiding Krylov subspace methods
Krylov subspace methods (KSMs) are commonly used for solving linear systems, eigenvalue problems, and singular value problems. Standard KSMs are communication-bound on modern computer architectures, due to the required sparse matrix-vector multiplication and projection operations in each iteration. This motivated s-step KSMs, which can use blocking strategies to increase temporal locality, allowing an O(s) reduction in communication cost. Despite attractive performance benefits, these variants are often considered impractical, as increased error in finite precision can negatively affect stability. We discuss practical techniques for alleviating these problems in s-step methods while still achieving an asymptotic reduction in communication cost.
Erin Carson
Dept. of Computer Science
University of California, Berkeley
[email protected]

Nicholas Knight
Dept. of Computer Science
University of California, Berkeley
[email protected]

James Demmel
Dept. of Computer Science, Mathematics
University of California, Berkeley
[email protected]
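The basis-conditioning issue behind these finite-precision problems can be seen directly. The sketch below (our own illustration with made-up parameters; real s-step solvers choose Ritz or Leja shifts adaptively) compares the monomial s-step basis [v, Av, ..., A^s v] with a shifted Newton basis:

```python
import numpy as np

def s_step_basis(A, v, s, shifts=None):
    """s-step matrix-powers basis: monomial [v, Av, ..., A^s v] when
    shifts is None, or the Newton basis v, (A - t1 I)v,
    (A - t2 I)(A - t1 I)v, ... otherwise. Well-chosen shifts keep the
    basis far better conditioned in finite precision."""
    V = [v / np.linalg.norm(v)]
    for k in range(s):
        w = A @ V[-1] if shifts is None else A @ V[-1] - shifts[k] * V[-1]
        V.append(w / np.linalg.norm(w))
    return np.column_stack(V)

A = np.diag(np.linspace(1.0, 100.0, 100))   # spectrum spread over [1, 100]
v = np.random.default_rng(4).standard_normal(100)
s = 12
cond_monomial = np.linalg.cond(s_step_basis(A, v, s))
cond_newton = np.linalg.cond(
    s_step_basis(A, v, s, shifts=np.linspace(1.0, 100.0, s)))
```

The monomial basis vectors all drift toward the dominant eigendirections, so its condition number explodes with s, while the shifted basis stays orders of magnitude better conditioned.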

Talk 3. Hiding global synchronization latencies in Krylov methods for systems of linear equations
In Krylov methods, global synchronization due to reduction operations for dot-products is becoming a bottleneck on parallel machines. We adapt GMRES and CG such that this global communication latency is completely overlapped by other local work. This requires the global communication to be performed in a non-blocking or asynchronous way. To maintain stability even at the strong scaling limit, different Krylov bases, like Newton and Chebychev bases, can be used. Our performance model predicts large benefits for future exascale machines as well as for current-scale applications such as solving the coarsest level of a multigrid hierarchy in parallel.
Pieter Ghysels
Dept. of Math and Computer Science
University of Antwerp
[email protected]

Wim Vanroose
Dept. of Math and Computer Science
University of Antwerp
[email protected]

Talk 4. Avoiding communication with hierarchical matrices
We show how to reorganize the construction of a Krylov basis [x, Ax, ..., A^s x] to asymptotically reduce data movement when A is a hierarchical matrix. Our approach extends the blocking covers algorithm of Leiserson, Rao, and Toledo, and requires that the off-diagonal blocks of A are low-rank. This approach enables communication-avoiding s-step Krylov subspace methods with hierarchical preconditioners. We also discuss extensions to multigrid and fast multipole methods.
Nick Knight
Dept. of Electrical Engineering and Computer Science
University of California, Berkeley
[email protected]

Erin Carson
Dept. of Electrical Engineering and Computer Science
University of California, Berkeley
[email protected]

James Demmel
Dept. of Electrical Engineering and Computer Science
University of California, Berkeley
[email protected]

MS 31. Linear Algebra for Inverse Problems - Part II of II
Talk 1. Implicit filtering methods for inverse problems
In this talk we consider a nonlinear least squares framework to solve separable nonlinear ill-posed inverse problems. It is shown that with proper constraints and well chosen regularization parameters, it is possible to obtain an objective function that is fairly well behaved. Although uncertainties in the data and inaccuracies of linear solvers make it unlikely to obtain a smooth and convex objective function, it is shown that implicit filtering optimization methods can be used to avoid becoming trapped in local minima. An application to blind deconvolution is used for illustration.
James G. Nagy
Dept. of Mathematics and Computer Science
Emory University
[email protected]

Anastasia Cornelio
Dept. of Mathematics
University of Modena and Reggio Emilia
[email protected]

Elena Loli Piccolomini
Department of Mathematics
University of Bologna
[email protected]

Talk 2. Iterative reconstruction methods for adaptive optics
The image quality of large ground-based astronomical telescopes suffers from turbulence in the atmosphere. Adaptive Optics is a hardware-based technique that corrects the influence of the turbulence. To this end, wavefront measurements from the incoming wavefront of several guide stars are used for the reconstruction of the turbulent layers in the atmosphere. The reconstructed atmosphere is then used to compute the surface of several deformable mirrors such that, in the reflected light, the influence of the turbulence is removed. The reconstruction of the turbulence in the atmosphere is related to a limited angle tomography problem and is therefore severely ill posed. For the new class of extremely large telescopes, the numerical task is to invert a linear system with a dimension of approximately 60,000 × 10,000 every millisecond.
We present iterative methods, based on Kaczmarz and CG, for the reconstruction of the layers. The methods will be evaluated in different function spaces. In particular, we will include the modeling of specific effects that are related to the use of laser guide stars. We will also demonstrate that our methods are able to compute the correcting shape of the deformable mirrors within the available time frame.
Ronny Ramlau

Industrial Mathematics Institute
Kepler University Linz
[email protected]

Talk 3. Approximated nonstationary iterated Tikhonov with application to image deblurring
In this talk we present new iterative regularization methods based on the approximation of nonstationary iterated Tikhonov. In particular we investigate the image deblurring problem, where the blurring matrix is not easily invertible at low computational cost, while an approximation with such a property is available. This is for instance the case of block Toeplitz matrices with Toeplitz blocks, which can be well approximated by block circulant matrices with circulant blocks, which can be diagonalized by two-dimensional fast Fourier transforms. Matrices arising from the imposition of other boundary conditions can be considered as well.
A detailed analysis is proposed in the stationary case, and we discuss relations with the preconditioned Landweber method and other known methods. A large numerical experimentation with different boundary conditions, blurring phenomena and noise levels is presented.
Marco Donatelli
Dept. of Scienza e Alta Tecnologia
University of Insubria
[email protected]

Martin Hanke
Institute of Mathematics
University of Mainz
[email protected]
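The basic nonstationary iterated Tikhonov recursion that the talk approximates reads x_{k+1} = x_k + (A^T A + α_k I)^{-1} A^T (b - A x_k) with a decreasing sequence α_k. A minimal sketch with exact solves follows (the talk's point is to replace these solves by a cheap, e.g. circulant, approximation of A, which we do not do here):

```python
import numpy as np

def iterated_tikhonov(A, b, alphas):
    """Nonstationary iterated Tikhonov in its basic form:
        x_{k+1} = x_k + (A^T A + alpha_k I)^{-1} A^T (b - A x_k)
    with decreasing regularization parameters alphas. Exact dense solves
    are used; the talk substitutes an easily invertible approximation."""
    x = np.zeros(A.shape[1])
    for alpha in alphas:
        r = b - A @ x
        x = x + np.linalg.solve(A.T @ A + alpha * np.eye(A.shape[1]), A.T @ r)
    return x

# sanity check on noise-free data: the iterates converge to the exact x
rng = np.random.default_rng(5)
A = rng.standard_normal((30, 10))
x_true = rng.standard_normal(10)
b = A @ x_true
x = iterated_tikhonov(A, b, alphas=[10.0 ** (-k) for k in range(9)])
```

With noisy data the decreasing α_k sequence would instead be truncated early (a discrepancy-type stopping rule), which is where the regularizing effect comes from.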

Talk 4. On the Richardson-Lucy method for image restoration
Image deconvolution problems with a symmetric point-spread function arise in many areas of science and engineering. These problems are often solved by the Richardson-Lucy method, a nonlinear iterative method. In this talk we first show a convergence result for the Richardson-Lucy method. The proof sheds light on why the method may converge slowly. Subsequently, we describe an iterative active set method that imposes the same constraints on the computed solution as the Richardson-Lucy method. Computed examples show the latter method to yield better restorations than the Richardson-Lucy method with less computational effort.
Fiorella Sgallari
Dept. of Mathematics - CIRAM
University of Bologna, Bologna
[email protected]

M. Kazim Khan
Dept. of Mathematical Sciences
Kent State University, Kent, Ohio
[email protected]

Serena Morigi
Dept. of Mathematics - CIRAM
University of Bologna, Bologna
[email protected]

Lothar Reichel
Dept. of Mathematical Sciences
Kent State University, Kent, Ohio
[email protected]
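For reference, the Richardson-Lucy iteration itself is a one-line multiplicative update (a sketch under the usual assumptions of nonnegative data and a column-normalized blurring matrix; the tiny example matrix is our own):

```python
import numpy as np

def richardson_lucy(A, b, x0, iters):
    """Richardson-Lucy iteration for b = A x with nonnegative data and a
    blurring matrix A whose columns sum to one:
        x <- x * ( A^T (b / (A x)) )
    The update keeps x nonnegative, conserves total intensity
    (sum(x) = sum(b) after one step), and leaves any exact nonnegative
    solution fixed; its slow convergence is what the talk analyzes."""
    x = np.asarray(x0, dtype=float).copy()
    for _ in range(iters):
        x *= A.T @ (b / (A @ x))
    return x

# periodic 1D blur with kernel (0.1, 0.8, 0.1): columns sum to one
n = 5
A = 0.8 * np.eye(n) + 0.1 * (np.eye(n, k=1) + np.eye(n, k=-1))
A[0, -1] = A[-1, 0] = 0.1
x_true = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
b = A @ x_true
x = richardson_lucy(A, b, np.full(n, b.mean()), iters=50)
```

Because the update is multiplicative, a uniform positive start is the customary choice; entries of x can only shrink toward zero, never become negative.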

MS 32. Orderings in sparse matrix computation
Talk 1. Orderings and solvers for “non-uniform sparse matrices”



Sparse matrices from non-PDE problems come in rich varieties. We specifically address sparse matrices that have a non-uniform distribution of nonzeros per row (power-law graphs are a popular example that lead to these), which may be called “non-uniform sparse matrices.” We examine the appropriateness of traditional orderings to solve various problems for these types of matrices. For example, instead of RCM reordering for ILU(k), it is natural and indeed better to use an ordering by increasing degree. By studying why such orderings are effective, we are led to “incomplete” versions of minimum degree ordering, i.e., the ordering operates like minimum degree, but the maximum degree is capped by a parameter, just as the fill-in level in ILU(k) is capped. Lee, Raghavan, and Ng proposed such a technique for pivoting in 2006. In our studies, tie-breaking and the complementary strategies of reducing the number of fill-ins and the aggregate size of fill-ins are important issues. Sparse matrices from PDEs have well-defined model problems; to further study non-uniform sparse matrices, we propose models (structure and values) derived from realistic applications.
Edmond Chow
School of Computational Science and Engineering
College of Computing
Georgia Institute of Technology
[email protected]

Oguz Kaya
School of Computational Science and Engineering
College of Computing
Georgia Institute of Technology
[email protected]

Talk 2. On hypergraph partitioning based ordering methods for sparse matrix factorization
We discuss the use of hypergraph-based methods for orderings of sparse matrices in Cholesky, LU and QR factorizations. For the Cholesky factorization case, we investigate a recent result on pattern-wise decomposition of sparse matrices, generalize the result, and develop effective ordering algorithms. The generalized results help us formulate the ordering problem in LU as we do in the Cholesky case, without ever symmetrizing the given matrix. For the QR factorization case, the use of hypergraph models is fairly standard; the method does not symmetrize the given matrix. We present comparisons with the most common alternatives in all three cases.
Bora Ucar
CNRS and ENS Lyon
[email protected]

Iain S. Duff
RAL, Oxon OX11 0QX, England, and CERFACS, Toulouse
[email protected]

Johannes Langguth
ENS Lyon
[email protected]

Talk 3. Orderings governed by numerical factorization
We study the solution of sparse least-squares problems using an augmented system approach:

    [ I    A ] [ r ]   [ b ]
    [ A^T  0 ] [ x ] = [ 0 ]

If the null space approach for constrained optimization is used, a crucial aspect is the selection of the basis rows from the overdetermined matrix A. We discuss the effect of this, showing that the concept of condition number and conditioning needs to be rethought in this case. We illustrate our discussion with runs using a basis selection routine from HSL that involves a sparse factorization with rook pivoting and a subsequent solution of the augmented system using iterative methods.
Iain S. Duff
Rutherford Appleton Laboratory
[email protected]

Mario Arioli
Rutherford Appleton Laboratory
[email protected]

Talk 4. Reordering sparse Cholesky factorization: minimum fill vs. minimum FLOP count
Given a sparse positive definite matrix A, we discuss the problem of finding a permutation matrix P such that the number of floating point operations for computing the Cholesky factorization PAP^T = LL^T is minimum. Two theoretical results are presented. First, we outline a reduction from MAXCUT in order to give an NP-hardness result. Second, we discuss the relationship of minimizing FLOPs and the minimum fill problem. Using an explicit construction we show that orderings which minimize the operation count may be non-optimal in terms of fill, and vice versa.
Robert Luce
Dept. of Mathematics
Technical University of Berlin
[email protected]

Esmond G. Ng
Lawrence Berkeley National Laboratory
[email protected]

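How strongly the ordering drives both fill and operation count can be seen on a small arrowhead matrix. The sketch below (illustrative only, not the construction from the talk) counts nonzeros of L and the operations of a naive dense Cholesky that skips arithmetic with exact zeros, as a stand-in for sparse cost:

```python
import numpy as np

def chol_fill_and_flops(A):
    """Lower Cholesky; count nonzeros of L and FLOPs, skipping exact zeros."""
    A = A.astype(float).copy()
    n = A.shape[0]
    nnz = flops = 0
    for k in range(n):
        A[k, k] = np.sqrt(A[k, k]); flops += 1
        for i in range(k + 1, n):
            if A[i, k] != 0.0:
                A[i, k] /= A[k, k]; flops += 1
        for j in range(k + 1, n):
            for i in range(j, n):
                if A[i, k] != 0.0 and A[j, k] != 0.0:
                    A[i, j] -= A[i, k] * A[j, k]; flops += 2
        nnz += np.count_nonzero(A[k:, k])
    return nnz, flops

n = 6
# SPD arrowhead matrix: dense first row/column, diagonal elsewhere.
arrow = np.eye(n) * n
arrow[0, :] = arrow[:, 0] = 1.0
arrow[0, 0] = n

p_bad = np.arange(n)                 # eliminate the dense "hub" row first
p_good = np.roll(np.arange(n), -1)   # eliminate the hub last
nnz_bad, fl_bad = chol_fill_and_flops(arrow[np.ix_(p_bad, p_bad)])
nnz_good, fl_good = chol_fill_and_flops(arrow[np.ix_(p_good, p_good)])
```

Eliminating the hub first fills L completely (n(n+1)/2 nonzeros), while eliminating it last preserves the arrowhead pattern (2n−1 nonzeros) and needs far fewer operations.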
MS 33. Moving from multicore to manycore in applied linear algebra
Talk 1. Parallel preconditioners and multigrid methods for sparse systems on GPUs
Large-scale numerical simulation relies on both efficient parallel solution schemes and platform-optimized parallel implementations. Due to the paradigm shift towards multicore and manycore technologies, both aspects have become more intricate. In this talk we address fine-grained parallel preconditioning techniques and multigrid solvers that are compliant with SIMD-like parallelism of graphics processing units. By means of multi-coloring reordering combined with the power(q)-pattern method for incomplete LU decompositions with fill-in, we show how scalable parallelism can be introduced to the triangular solution phase of smoothers and preconditioners. Performance results demonstrate the efficiency and scalability of our approach on recent GPUs and on multicore CPUs.

Jan-Philipp Weiss
Engineering Mathematics and Computing Lab (EMCL), Karlsruhe Institute of Technology (KIT), Germany
[email protected]

Dimitar Lukarski
Engineering Mathematics and Computing Lab (EMCL), Karlsruhe Institute of Technology (KIT), Germany
[email protected]

Vincent Heuveline
Engineering Mathematics and Computing Lab (EMCL), Karlsruhe Institute of Technology (KIT), Germany
[email protected]

2012 SIAM Conference on Applied Linear Algebra

Talk 2. Towards a GPU-accelerated direct sparse solver
We present a detailed study of the acceleration of a direct method to solve sparse linear systems using GPU-based computing. In particular, we describe a variant of the factorization stage of the multifrontal method which combines CPU and GPU computations. Additionally, the developed routine is included in the CSparse library to cover the whole solver. We evaluate the proposal and compare it with an ad-hoc multicore implementation and the implementation included in the MUMPS library (combined with Goto BLAS). The results obtained show that this is a promising research line to accelerate this kind of sparse matrix solver on cheap hardware platforms.

Pablo Ezzatti
Instituto de Computacion, Univ. de la Republica, Montevideo, Uruguay
[email protected]

Alejandro Zinemanas
Instituto de Computacion, Univ. de la Republica, Montevideo, Uruguay
[email protected]

Talk 3. Exploiting the flexibility of libflame for novel multi-core and many-core architectures
The libflame library is a modern, high-performance dense linear algebra library that is extensible, easy to use, available under an open source license, and offers competitive (and in many cases superior) real-world performance when compared to traditional libraries like LAPACK. It can optionally exploit multiple cores by casting an algorithm as an algorithm-by-blocks, which generates a directed acyclic graph (DAG) that is then scheduled to cores via a runtime system called SuperMatrix. In this talk we demonstrate its flexibility for running dense linear algebra codes efficiently on general-purpose architectures, including single-core, multi-core and GPU-based architectures. In addition, we adapt our runtime to support a novel, highly efficient HPC architecture: the Texas Instruments C6678 multi-core DSP.

Francisco D. Igual
Dept. de Ingenieria y Ciencia de los Computadores, University Jaume I de Castellon
[email protected]

Murtaza Ali
Texas Instruments
[email protected]

Robert A. van de Geijn
The University of Texas at Austin
[email protected]

Talk 4. High-performance genome studies
In the context of genome-wide association studies (GWAS), one has to solve long sequences of generalized least-squares problems; such a problem presents two limiting factors: execution time (often in the range of days or weeks) and data management (in the order of terabytes). We present algorithms that obviate both issues. By taking advantage of domain-specific knowledge, exploiting the parallelism provided by multicores and GPUs, and handling data efficiently, our algorithms attain unequalled high performance. When compared to GenABEL, one of the most widely used libraries for GWAS, we obtain speedups of up to a factor of 50.

Lucas Beyer
Aachen Institute for Advanced Study in Computational Engineering Science (AICES), RWTH Aachen
[email protected]

Diego Fabregat-Traver
AICES, RWTH Aachen
[email protected]

Paolo Bientinesi
AICES, RWTH Aachen
[email protected]

MS 34. Least squares methods and applications
Talk 1. Block Gram–Schmidt algorithms with reorthogonalization
The talk discusses reorthogonalized block classical Gram–Schmidt algorithms for factoring a matrix A into A = QR, where Q is left orthogonal (has orthonormal columns) and R is upper triangular. These algorithms are useful in developing BLAS-3 versions of orthogonal factorization, in the implementation of Krylov space methods, and in modifying an existing orthogonal decomposition.
In previous work, we have made assumptions about the diagonal blocks of R to ensure that a block classical Gram–Schmidt algorithm with reorthogonalization will produce a backward stable decomposition with a near left orthogonal Q. The context of this talk is where these diagonal blocks violate those assumptions and are allowed to be singular or very ill-conditioned. A strategy for using rank-revealing decompositions to deal with that contingency is given that ensures a result similar to the case where the assumptions about the blocks of R hold. The algorithm is considered in the context of block downdating.

Jesse Barlow
Department of Computer Science and Engineering, Penn State University, University Park, PA 16802, USA
[email protected]

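A minimal NumPy sketch of block classical Gram–Schmidt with one reorthogonalization pass (CGS2); it assumes well-conditioned diagonal blocks and omits the talk's rank-revealing treatment of singular blocks:

```python
import numpy as np

def block_cgs2(A, block=2):
    """Block CGS with one reorthogonalization pass: A = Q R,
    Q left orthogonal, R upper triangular (a sketch, assuming
    well-conditioned blocks)."""
    m, n = A.shape
    Q = np.zeros((m, n))
    R = np.zeros((n, n))
    for s in range(0, n, block):
        e = min(s + block, n)
        W = A[:, s:e].copy()
        # Project against previous blocks twice ("twice is enough").
        for _ in range(2):
            C = Q[:, :s].T @ W
            R[:s, s:e] += C
            W -= Q[:, :s] @ C
        # Intra-block orthogonalization via Householder QR.
        Qb, Rb = np.linalg.qr(W)
        Q[:, s:e] = Qb
        R[s:e, s:e] = Rb
    return Q, R

rng = np.random.default_rng(1)
A = rng.standard_normal((20, 6))
Q, R = block_cgs2(A)
assert np.allclose(Q.T @ Q, np.eye(6), atol=1e-12)
assert np.allclose(Q @ R, A)
assert np.allclose(R, np.triu(R))
```

The accumulated corrections from both passes go into R, so the factorization is exact while the second pass restores orthogonality lost in the first.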
Talk 2. A numerical method for a mixed discrete bilinear least squares problem
For CPU utilization control in distributed real-time embedded systems, we recently proposed a new scheme to make synchronous rate and frequency adjustments to enforce the utilization set point. In this scheme, we need to solve a mixed discrete bilinear least squares problem, in which one unknown vector is subject to a box constraint and each entry of the other unknown vector has to be taken from a discrete set of numbers. In this talk we propose an alternating iterative method to solve this problem.
This is joint work with Xi Chen and Xue Liu.

Xiao-Wen Chang
School of Computer Science, McGill University, Montreal, Quebec H3A 1Y1, Canada
[email protected]

Talk 3. On condition numbers for constrained linear least squares problems
Condition numbers are important in numerical linear algebra, as they can give us posterior error bounds for the computed solution. Classical condition numbers are normwise, but they ignore the sparsity and/or scaling of the input data. Componentwise analysis has been introduced, which gives a powerful tool to study perturbations of the input and output data with respect to sparsity and scaling. In this talk, under componentwise perturbation analysis, we study condition numbers for constrained linear least squares problems. The obtained expressions of the condition numbers avoid the explicit forming of Kronecker products and can be estimated efficiently by power methods. Numerical examples show that our condition numbers can give better error bounds.

Huaian Diao
School of Mathematics and Statistics, Northeast Normal University, Chang Chun 130024, P.R. China
[email protected]

Yimin Wei
School of Mathematical Sciences, Fudan University, Shanghai 200433, P.R. China
[email protected]

Talk 4. SOR inner-iteration GMRES for underdetermined least squares problems
Successive over-relaxation (SOR) inner iterations are proposed for preconditioning the generalized minimal residual method (GMRES) for underdetermined least squares problems. The right-preconditioned GMRES (AB-GMRES) may fail to converge for inconsistent problems. Instead, the left-preconditioned GMRES method (BA-GMRES) works, since we can transform an inconsistent system into a consistent one. Numerical experiments show that BA-GMRES with SOR for the normal equations works efficiently. Moreover, we show conditions under which BA-GMRES determines a minimum-norm least squares solution without breakdown and present numerical results.

Keiichi Morikuni
Department of Informatics, School of Multidisciplinary Science, The Graduate University for Advanced Studies, 2-1-2 Hitotsubashi, Chiyoda-ku, Tokyo, Japan
[email protected]

Ken Hayami
Department of Informatics, School of Multidisciplinary Science, The Graduate University for Advanced Studies, 2-1-2 Hitotsubashi, Chiyoda-ku, Tokyo, Japan
[email protected]

MS 35. Nonlinear eigenvalue problems
Talk 1. Computable error bounds for nonlinear eigenvalue problems allowing for a minimax characterization
For a nonlinear eigenvalue problem

    T(λ)x = 0

allowing for a variational characterization of its eigenvalues, we discuss computable error bounds of Krylov–Bogoliubov and of Kato–Temple type.

Heinrich Voss
Institute of Numerical Simulation, Hamburg University of Technology
[email protected]

Kemal Yildiztekin
Institute of Numerical Simulation, Hamburg University of Technology
[email protected]

Talk 2. A restarting technique for the infinite Arnoldi method
Different adaptations of the Arnoldi method are often used to compute partial Schur factorizations. We propose here a technique to compute a partial Schur factorization of a nonlinear eigenvalue problem (NEP). The technique is inspired by the algorithm in [E. Jarlebring, K. Meerbergen, W. Michiels, A linear eigenvalue algorithm for the nonlinear eigenvalue problem, 2012], now called the infinite Arnoldi method, for which we design an appropriate restarting technique. The technique is based on a characterization of the invariant pairs of the NEP, which turn out to be equivalent to the invariant pairs of an operator. We characterize the structure of the invariant pairs of the operator and show how one can carry out a modification of the infinite Arnoldi method by respecting this structure. This also allows us to naturally add the feature known as locking. We nest this algorithm in an outer iteration, where the infinite Arnoldi method for a particular type of structured functions is appropriately restarted. The restarting exploits the structure and is inspired by the well-known implicitly restarted Arnoldi method for standard eigenvalue problems.

Elias Jarlebring
CSC / Numerical Analysis, KTH Royal Institute of Technology
[email protected]

Karl Meerbergen
Dept. of Computer Science, KU Leuven
[email protected]

Wim Michiels
Dept. of Computer Science, KU Leuven
[email protected]

Talk 3. Robust successive computation of eigenpairs for nonlinear eigenvalue problems
The successive computation of several eigenpairs of a nonlinear eigenvalue problem requires a means to prevent our algorithm from repeatedly converging to the same eigenpair. For linear eigenproblems, this is usually accomplished by restricting the problem to the subspace orthogonal to all previously computed eigenvectors. In the nonlinear case, though, this strategy is harmful, as it may cause eigenpairs to be missed due to possible linear dependencies among the eigenvectors. In this talk, we present a modified orthogonalization scheme based on the concept of minimal invariant pairs, which does not bear this danger.

Cedric Effenberger
Mathematics Institute of Computational Science and Engineering, Swiss Federal Institute of Technology Lausanne
[email protected]

Talk 4. Triangularization of matrix polynomials
Using similarity transformations that preserve the structure of the companion matrix, we show that any matrix polynomial P(λ) over the complex numbers can be triangularized, in the sense that there always exists a triangular matrix polynomial T(λ) having the same degree and elementary divisors (finite and infinite) as P(λ). Although the result is theoretical, the approach is constructive and hints at possible numerical algorithms.


Leo Taslaman
School of Mathematics, University of Manchester
[email protected]

Yuji Nakatsukasa
School of Mathematics, University of Manchester
[email protected]

Francoise Tisseur
School of Mathematics, University of Manchester
[email protected]

MS 36. Hybrid solvers for sparse linear equations
Talk 1. The augmented block-Cimmino distributed method
In row projection methods, such as block-Cimmino, near linear dependency between partitions implies small eigenvalues in the spectrum of the iteration matrix. In this work we try to break this near linear dependency by augmenting the matrix to enforce orthogonality between partitions. We formulate a linear system in a super-space, where the extra equations that are introduced for consistency are obtained by projections and can be handled implicitly. In the ideal case the resulting iteration matrix has all eigenvalues almost equal to one. We investigate the numerical components of this approach and ways to reduce the number of super-space variables.

Mohamed [email protected]

Iain Duff
CERFACS and Rutherford Appleton Laboratory
[email protected]

Ronan [email protected]

Daniel [email protected]

Talk 2. On a parallel hierarchical algebraic domain decomposition method for a large scale sparse linear solver
The emerging petaflop computers have processing nodes based on multi-/many-core chips. To fully utilize such parallel architectures, a natural approach is to exploit medium-grain parallelism through multi-threading within a node and coarse-grain parallelism using message passing (MPI) between nodes. For a hybrid direct/iterative solver, this can be implemented by using a parallel sparse direct solver within a node on local subproblems. The available multi-threaded sparse direct solvers can be used to take advantage of the multicore architecture of the nodes. The MPI paradigm is then used to implement the iterative scheme for the interfaces between the subproblems. In this talk, we will present such a parallel implementation and illustrate its performance scalability on a set of academic and industrial test problems. In addition, we will propose a model to compute the subsequent computational and memory costs, depending on the ratio of computation performed in the direct solver. We apply this model to several classes of problems and illustrate the possible trade-offs in terms of computational and memory complexity.

Luc Giraud
Inria, joint Inria-CERFACS lab
[email protected]

Emmanuel Agullo

[email protected]

Abdou Guermouche
Bordeaux [email protected]

Jean [email protected]

Talk 3. A two-level Schwarz method for systems with high contrasts
Two-level Schwarz methods are popular domain decomposition methods. They are based on direct solves in each subdomain as well as in a specified coarse space. Used as a preconditioner for the conjugate gradient algorithm, they lead to efficient hybrid solvers.
Heterogeneous coefficients in the PDEs are a known challenge for robustness in domain decomposition, especially if the heterogeneities are across interfaces. We introduce a new coarse space for which the Schwarz method is robust even in this case. It relies on solving local generalized eigenvalue problems in the overlaps between subdomains, which isolate exactly the terms responsible for slow convergence.

Nicole Spillane
Laboratoire Jacques-Louis Lions, Universite Pierre et Marie Curie
[email protected]

Victorita Dolean
Laboratoire J.-A. Dieudonne, Universite de Nice-Sophia Antipolis
[email protected]

Patrice Hauret
Centre de technologie, MFP Michelin
[email protected]

Frederic Nataf
Laboratoire Jacques-Louis Lions, Universite Pierre et Marie Curie
[email protected]

Clemens Pechstein
Institute of Computational Mathematics, Johannes Kepler University Linz
[email protected]

Robert Scheichl
Department of Mathematical Sciences, University of Bath
[email protected]

Talk 4. A 3-level parallel hybrid preconditioner for sparse linear systems
We describe ShyLU, a hybrid direct-iterative preconditioner and solver for general sparse linear systems. We use a Schur complement approach where subproblems are solved using a direct solver while the Schur complement is solved iteratively. We employ a block decomposition, which is formed in a fully automatic way using state-of-the-art partitioning and ordering algorithms in the Zoltan and Scotch software packages. The method can be viewed as a two-level scheme. We show results that demonstrate the robustness and effectiveness of ShyLU for problems from a variety of applications. We have found it is particularly effective on circuit simulation problems where many iterative methods fail. We also show results from a hybrid MPI + threads version of ShyLU. The Schur complement strategy in ShyLU works well for a moderate amount of parallelism. For very large problems on very large parallel systems, we introduce a third level based on domain decomposition to improve scalability. We study how ShyLU behaves as a subdomain solver. This strategy looks promising for peta- and exascale systems, which usually have a hierarchical structure.

Erik G. Boman
Sandia National Laboratories
[email protected]

Siva Rajamanickam
Sandia National Laboratories
[email protected]

Michael A. Heroux
Sandia National Laboratories
[email protected]

Radu Popescu
EPFL
[email protected]

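The Schur complement strategy that ShyLU builds on can be illustrated on a dense 2×2 block system; this sketch replaces ShyLU's sparse direct and iterative solvers with plain dense solves:

```python
import numpy as np

rng = np.random.default_rng(2)
n1, n2 = 6, 3
# Block system [[K, B], [C, D]] [x1; x2] = [f; g]: the "subdomain"
# block K is handled by a direct solver, the Schur complement
# S = D - C K^{-1} B by an iterative one in ShyLU; dense here.
K = np.eye(n1) * 5 + rng.standard_normal((n1, n1))
B = rng.standard_normal((n1, n2))
C = rng.standard_normal((n2, n1))
D = np.eye(n2) * 5 + rng.standard_normal((n2, n2))
f = rng.standard_normal(n1)
g = rng.standard_normal(n2)

S = D - C @ np.linalg.solve(K, B)                     # Schur complement
x2 = np.linalg.solve(S, g - C @ np.linalg.solve(K, f))
x1 = np.linalg.solve(K, f - B @ x2)

# Agrees with solving the assembled system monolithically.
M = np.block([[K, B], [C, D]])
x = np.linalg.solve(M, np.concatenate([f, g]))
assert np.allclose(np.concatenate([x1, x2]), x)
```

The appeal of the approach is that only the (much smaller) Schur system is solved iteratively, while the subdomain solves factor K once and reuse it.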
MS 37. Optimization methods for tensor decomposition
Talk 1. Efficient algorithms for tensor decompositions
The canonical polyadic and rank-(Lr, Lr, 1) block term decomposition (CPD and BTD, respectively) are two closely related tensor decompositions. The CPD is an important tool in psychometrics, chemometrics, neuroscience and data mining, while the rank-(Lr, Lr, 1) BTD is an emerging decomposition in signal processing and, recently, blind source separation. We present a decomposition that generalizes these two and develop algorithms for its computation. Among these algorithms are alternating least squares schemes, several general unconstrained optimization techniques, as well as matrix-free nonlinear least squares methods. In the latter we exploit the structure of the Jacobian's Gramian by means of efficient expressions for its matrix-vector product. Combined with an effective preconditioner, numerical experiments confirm that these methods are among the most efficient and robust currently available for computing the CPD, the rank-(Lr, Lr, 1) BTD, and their generalized decomposition.

Laurent Sorber
Dept. of Computer Science, KU Leuven
[email protected]

Marc Van Barel
Dept. of Computer Science, KU Leuven
[email protected]

Lieven De Lathauwer
Group Science, Engineering and Technology, KU Leuven Campus Kortrijk & Dept. of Electrical Engineering (ESAT), KU Leuven
[email protected]

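The alternating least squares idea mentioned above can be sketched for the simplest case, a rank-1 CPD of a 3-way tensor. This is an illustrative toy (`cp_als_rank1` is a hypothetical helper, not from the speakers' software), but each update is exactly the least-squares solve for one factor with the others fixed:

```python
import numpy as np

def cp_als_rank1(T, iters=25):
    """ALS for a rank-1 CPD of a 3-way tensor: T ~ a (outer) b (outer) c.
    A minimal sketch of the alternating idea only."""
    a = np.ones(T.shape[0]); b = np.ones(T.shape[1]); c = np.ones(T.shape[2])
    for _ in range(iters):
        # Each step is the exact LS-optimal factor given the other two.
        a = np.einsum('ijk,j,k->i', T, b, c) / ((b @ b) * (c @ c))
        b = np.einsum('ijk,i,k->j', T, a, c) / ((a @ a) * (c @ c))
        c = np.einsum('ijk,i,j->k', T, a, b) / ((a @ a) * (b @ b))
    return a, b, c

rng = np.random.default_rng(3)
u, v, w = rng.standard_normal(4), rng.standard_normal(5), rng.standard_normal(3)
T = np.einsum('i,j,k->ijk', u, v, w)      # exact rank-1 tensor
a, b, c = cp_als_rank1(T)
approx = np.einsum('i,j,k->ijk', a, b, c)
assert np.allclose(approx, T, atol=1e-8)
```

For rank R > 1 each update becomes a linear least squares solve against a Khatri–Rao product, which is where the Gramian structure exploited in the talk enters.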
Talk 2. Symmetric tensor decomposition via a power method for the generalized tensor eigenproblem
We present an approach for approximate (least-squares) decomposition of a symmetric positive-definite tensor of dimension n into a given number p of symmetric rank-1 terms. This is accomplished in two parts. The first part is to transform the decomposition problem into a "generalized tensor eigenproblem" (GTEP) for tensors of dimension np. The GTEP is of independent interest and, for example, subsumes the previously defined Z-eigenpairs and H-eigenpairs. The second part is to extend a previously developed power method for Z-eigenpairs to solve the GTEP. We discuss the concepts underlying these techniques and provide numerical examples.

Jackson R. Mayo
Sandia National Laboratories
[email protected]

Tamara G. Kolda
Sandia National Laboratories
[email protected]

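The power-method idea that the GTEP work extends can be illustrated by the basic symmetric higher-order power method (S-HOPM) for a Z-eigenpair. This is a sketch, not the authors' method; S-HOPM is not guaranteed to converge in general (shifted variants address that), but it does on this rank-1 example:

```python
import numpy as np

def shopm(T, x0, iters=100):
    """S-HOPM sketch for a Z-eigenpair of a symmetric 3-way tensor:
    iterate x <- T.x.x, then normalize. Convergence not guaranteed
    in general."""
    x = x0 / np.linalg.norm(x0)
    for _ in range(iters):
        y = np.einsum('ijk,j,k->i', T, x, x)
        x = y / np.linalg.norm(y)
    lam = np.einsum('ijk,i,j,k->', T, x, x, x)
    return lam, x

v = np.array([3.0, 4.0]) / 5.0                 # unit vector
T = 2.0 * np.einsum('i,j,k->ijk', v, v, v)     # symmetric; Z-eigenpair (2, v)
lam, x = shopm(T, np.array([1.0, 0.0]))
assert np.isclose(abs(lam), 2.0)
assert np.allclose(np.abs(x), np.abs(v))
```

The computed pair satisfies the Z-eigenvalue equation T·x·x = λx with ||x|| = 1, the special case of the GTEP mentioned in the abstract.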
Talk 3. All-at-once optimization for coupled matrix and tensor factorizations
Joint analysis of data from multiple sources can enhance knowledge discovery. The task of fusing data, however, is challenging since data may be incomplete and heterogeneous, i.e., data consists of matrices and higher-order tensors. We formulate this problem as a coupled matrix and tensor factorization problem where heterogeneous data are modeled by fitting outer-product models to higher-order tensors and matrices in a coupled manner. Unlike traditional approaches using alternating algorithms, we use a gradient-based all-at-once optimization approach. Using numerical experiments, we demonstrate that the all-at-once approach is often more accurate than the alternating approach and discuss the advantages of coupled analysis in terms of missing data recovery.

Evrim Acar
University of Copenhagen
[email protected]

Tamara G. Kolda
Sandia National Laboratories
[email protected]

Daniel M. Dunlavy
Sandia National Laboratories
[email protected]

Rasmus Bro
University of Copenhagen
[email protected]

Talk 4. An algebraic multigrid optimization method for low-rank canonical tensor decomposition
A new algorithm based on algebraic multigrid is presented for computing the rank-R canonical decomposition of a tensor for small R. Standard alternating least squares (ALS) is used as the relaxation method. Transfer operators and coarse-level tensors are constructed in an adaptive setup phase that combines multiplicative correction and bootstrap algebraic multigrid. An accurate solution is computed by an additive solve phase based on the full approximation scheme. Numerical tests show that for certain test problems, our multilevel method significantly outperforms standalone ALS when a high level of accuracy is required.

Killian Miller
University of Waterloo
[email protected]

Hans De Sterck
University of Waterloo
[email protected]

MS 38. Generalized inverses and applications - Part I of II
Talk 1. The group inverse of additively modified matrices
Let A ∈ C^{n×n}. We recall that the group inverse of A is the unique matrix A# ∈ C^{n×n}, if it exists, which satisfies the equations A#AA# = A#, AA#A = A, and AA# = A#A. If A is nonsingular then A# = A^{-1}. It is well known that the group inverse of A exists if and only if rank(A) = rank(A^2). Alternatively, A# exists if A + I − AA^− is nonsingular, independently of the choice of A^−, where A^− denotes an inner inverse of A, i.e. AA^−A = A. Let B ∈ C^{n×r}, C ∈ C^{r×n}, and r < n. If A and I − CA^{-1}B are nonsingular, then to adapt the inverse of the modified matrix A − BC we can apply the Sherman–Morrison–Woodbury formula:

    (A − BC)^{-1} = A^{-1} + A^{-1}B(I − CA^{-1}B)^{-1}CA^{-1},

and hence we need to compute the inverse of I − CA^{-1}B, which has size r < n. This talk will focus on the group inverse of additively modified matrices, (A − BC)#, when it exists. Extensions of the previous formula will be considered. An application to the perturbation of the group inverse of A = I − T, where T is the transition matrix of a Markov chain, will be given. The research is partially supported by Project MTM2010-18057, "Ministerio de Ciencia e Innovacion" of Spain.

Nieves Castro Gonzalez
Departamento de Matematica, Universidad Politecnica de Madrid, Spain
[email protected]

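The Sherman–Morrison–Woodbury formula quoted in the abstract is easy to check numerically; a small NumPy verification (illustrative only, with random data):

```python
import numpy as np

rng = np.random.default_rng(4)
n, r = 6, 2
A = np.eye(n) * 3 + rng.standard_normal((n, n))
B = rng.standard_normal((n, r))
C = rng.standard_normal((r, n))

Ainv = np.linalg.inv(A)
# Sherman-Morrison-Woodbury: updating A^{-1} after the rank-r
# modification A - BC needs only an r x r inverse, r < n.
small = np.linalg.inv(np.eye(r) - C @ Ainv @ B)
smw = Ainv + Ainv @ B @ small @ C @ Ainv
assert np.allclose(smw, np.linalg.inv(A - B @ C))
```

This is exactly the size reduction the abstract points out: the n×n inverse is refreshed at the cost of inverting the r×r capacitance matrix I − CA⁻¹B.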
Talk 2. The Moore–Penrose inverse of a linear combination of commuting generalized and hypergeneralized projectors
We present new results concerning the representation of the Moore–Penrose inverse of a linear combination of generalized and hypergeneralized projectors, and give the form of the Moore–Penrose inverse, i.e., the group inverse, of c1 A + c2 B, where A, B are two commuting generalized or hypergeneralized projectors and c1, c2 ∈ C\{0} with c1^3 + c2^3 ≠ 0. Furthermore, we show that under these conditions the invertibility of c1 A + c2 B is independent of the choice of the scalars c1, c2. Also, we present some results relating different matrix partial orderings and the invertibility of a linear combination of EP matrices.

Dragana S. Cvetkovic-Ilic
Faculty of Mathematics and Sciences, University of Nis, Serbia
[email protected]

Talk 3. Generalized inverses of operators on Hilbert C∗-modules
We present new results on the theory and applications of generalized inverses of operators between Hilbert C∗-modules.

Dragan S. Djordjevic
Faculty of Sciences and Mathematics, University of Nis, Serbia
[email protected]

Talk 4. Some results on the reverse order law
We present some necessary and sufficient conditions concerning the reverse order laws for the group inverses of elements in rings and rings with involution. In particular, assuming that a and b are Moore–Penrose invertible or group invertible elements, we study equivalent conditions which are related to the reverse order law for the group inverse of the product ab. Also, some equivalent conditions which ensure that the product ab is an EP element are considered. Several conditions for (ab)# = b#(a#abb#)#a# to hold in rings are investigated. We extend recent results to more general settings, giving some new conditions and providing simpler proofs of already existing conditions.

D. Mosic
Faculty of Sciences and Mathematics, University of Nis, Serbia
[email protected]

MS 39. Challenges for the solution and preconditioning of multiple linear systems - Part I of II

Talk 1. Preconditioners for sequences of shifted linear systems

Sequences of shifted linear systems with the same right-hand side can be solved with multi-shift Krylov methods at almost the same cost as solving the unshifted system. This is possible because the Krylov subspaces for the shifted and the unshifted problems are the same. The main drawback of multi-shift Krylov methods is that they cannot be combined with an arbitrary preconditioner, since in general the Krylov subspaces for the preconditioned shifted problems will be different. In the talk we will discuss this problem in detail and also explain how to construct preconditioners that can be combined with multi-shift Krylov methods.

Martin B. van Gijzen
Delft Institute of Applied Mathematics, Delft University of Technology, the Netherlands
[email protected]

Daniel B. Szyld
Department of Mathematics, Temple University, Philadelphia, Pa., USA
[email protected]

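The shift invariance K_m(A, b) = K_m(A + σI, b) that multi-shift methods rely on can be verified directly. The following is an illustrative sketch using explicit power bases (fine at this scale, not a practical Krylov solver):

```python
import numpy as np

rng = np.random.default_rng(5)
n, m, sigma = 8, 4, 2.5
A = rng.standard_normal((n, n))
b = rng.standard_normal(n)

def krylov_basis(M, v, m):
    """Orthonormal columns spanning the Krylov subspace K_m(M, v)."""
    K = np.column_stack([np.linalg.matrix_power(M, j) @ v for j in range(m)])
    Q, _ = np.linalg.qr(K)
    return Q

# K_m(A, b) = K_m(A + sigma*I, b): the subspaces coincide, which is
# what lets multi-shift Krylov methods handle all shifts at once.
Q1 = krylov_basis(A, b, m)
Q2 = krylov_basis(A + sigma * np.eye(n), b, m)
# Same column space: projecting one basis onto the other loses nothing.
assert np.allclose(Q1 @ (Q1.T @ Q2), Q2, atol=1e-8)
assert np.allclose(Q2 @ (Q2.T @ Q1), Q1, atol=1e-8)
```

A general preconditioner M destroys this property, since K_m(M⁻¹(A+σI), M⁻¹b) is no longer shift-independent; that is the obstacle the talk addresses.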
Talk 2. Solving sequences of linear systems with application to model reduction
To obtain efficient solutions of sequences of linear systems arising in interpolatory model order reduction (both parametric and nonparametric), we recycle Krylov subspaces from one system (or pair of systems) in a sequence to the next. We first introduce a generalization of BiCG. Here we show that even for non-Hermitian matrices one can build bi-orthogonal bases (for the associated two Krylov subspaces) using a short-term recurrence. We then adapt and apply recycling BiCG and recycling BiCGSTAB to model reduction algorithms. For a model reduction problem we demonstrate that solving the problem without recycling leads to (about) a 50% increase in runtime.

Kapil Ahuja
Max Planck Institute for Dynamics of Complex Technical Systems, Magdeburg, Germany
[email protected]

Eric de Sturler
Dept. of Mathematics, Virginia Tech, Blacksburg, Va., USA
[email protected]

Serkan Gugercin
Dept. of Mathematics, Virginia Tech, Blacksburg, Va., USA
[email protected]

Peter Benner
Max Planck Institute for Dynamics of Complex Technical Systems, Magdeburg, Germany
[email protected]

Talk 3. Krylov subspace recycling for families of shifted linear systems
We address the solution of a sequence of families of linear systems. For each family, there is a base coefficient matrix A_i, and the coefficient matrices for all systems in the family differ from A_i by a multiple of the identity, e.g.,

    A_i x_i = b_i   and   (A_i + σ_i^(ℓ) I) x_i^(ℓ) = b_i   for ℓ = 1, ..., L_i,

where L_i is the number of shifts at step i. We propose a new method which solves the base system using GMRES with subspace recycling while constructing approximate corrections to the solutions of the shifted systems. At convergence of the base system solution, GMRES with subspace recycling is applied to further improve the solutions of the shifted systems to tolerance. We present analysis of this method and numerical results involving systems arising in lattice quantum chromodynamics.

Kirk M. Soodhalter
Department of Mathematics, Temple University, Philadelphia, Pa., USA
[email protected]

Daniel B. Szyld
Department of Mathematics, Temple University, Philadelphia, Pa., USA
[email protected]

Fei Xue
Department of Mathematics, Temple University, Philadelphia, Pa., USA
[email protected]

Talk 4. Krylov subspace recycling for faster model reduction algorithms
The numerical solution of shifted linear systems with multiple right-hand sides is a recurring task in many computational problems related to model reduction. Shifts and right-hand sides may be known in advance or may become available only after the previous linear system has been solved. We will survey some model reduction algorithms where these tasks occur and will demonstrate that Krylov subspace recycling may significantly accelerate the computation of reduced-order models in these situations.

Peter Benner
Max Planck Institute for Dynamics of Complex Technical Systems, Magdeburg, Germany
[email protected]

Lihong Feng
Max Planck Institute for Dynamics of Complex Technical Systems, Magdeburg, Germany
[email protected]

MS 40. Different perspectives on conditioning and numerical stability - Part I of II
Talk 1. Highly accurate numerical linear algebra via rank revealing decompositions
High accuracy algorithms are those that produce relative forward errors of the order of the unit roundoff of the computer, even for matrices that are very ill-conditioned in the traditional sense. Algorithms of this type exist only for certain structured matrices. A wide class of such matrices are those for which it is possible to compute accurate rank revealing decompositions (RRDs), i.e., factorizations XDY where D is diagonal and nonsingular, and X and Y are well conditioned. This class comprises many important structured matrices, like Vandermonde, Cauchy, graded matrices and many others. Originally, high accuracy algorithms acting on the factors of RRDs (instead of acting directly on the matrix) were designed for computing singular value decompositions (1999), then for computing eigenvalues/eigenvectors of symmetric matrices (2003, 2009), and very recently for computing solutions of linear systems (2011) and least squares problems (2012). The purpose of this talk is to present a unified description of all these algorithms and to pose some open problems in this area.

Froilan M. Dopico
ICMAT and Department of Mathematics, Universidad Carlos III de Madrid, Spain
[email protected]

Talk 2. Stability of numerical algorithms with quasiseparable matrices
Quasiseparable matrices are encountered in PDEs, computations with polynomials (interpolation, root finding, etc.), the design of digital filters and other areas of applied mathematics. There has been major interest in fast algorithms with quasiseparable matrices in the last decade. Many linear complexity algorithms have been developed, including QR factorization, a system solver, QR iterations and others. However, their numerical stability has not received equal attention from the scientific community. In our talk we will present first results of an error analysis of QR decomposition and system solvers. We will also present a generalization of Parlett's dqds algorithm and discuss perspectives of using quasiseparable matrices for the stable evaluation of polynomial roots.

Pavel Zhlobich
School of Mathematics, University of Edinburgh
[email protected]

Froilan Dopico
Departamento de Matematicas, Universidad Carlos III de Madrid
[email protected]

Vadim Olshevsky
Department of Mathematics, University of Connecticut
[email protected]

Talk 3. Gram-Schmidt orthogonalization with standard and non-standard inner product: rounding error analysis
In this contribution we consider the most important schemes used for orthogonalization with respect to the standard and a non-standard inner product, and review the main results on their behavior in finite precision arithmetic. Although all the schemes are mathematically equivalent, their numerical behavior can be significantly different. We treat separately the particular case of the standard inner product and show that similar results also hold when the inner product is induced by a positive diagonal matrix. We will show that in the case of a general inner product, the orthogonality between computed vectors depends not only on the linear independence of the initial vectors but also on the condition number of the matrix that induces the non-standard inner product. Finally we discuss the extension of this theory to some bilinear forms used in the context of various structure-preserving transformations.

Miroslav Rozloznik
Institute of Computer Science, Czech Academy of Sciences, Prague, Czech Republic
[email protected]

Jiří Kopal
Institute of Novel Technologies and Applied Informatics
Technical University of Liberec, Czech [email protected]

Alicja Smoktunowicz
Faculty of Mathematics and Information Science
Warsaw University of Technology, [email protected]

Miroslav Tůma
Institute of Computer Science, Czech Academy of Sciences
Prague, Czech [email protected]
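
To make the setting of the talk concrete, here is a minimal NumPy sketch of modified Gram-Schmidt with respect to a non-standard inner product <x, y> = xᵀBy induced by a symmetric positive definite B (here a positive diagonal matrix, one of the cases treated in the talk). The function name `mgs_b` and the test matrices are illustrative, not from the talk.

```python
import numpy as np

def mgs_b(A, B):
    """Modified Gram-Schmidt w.r.t. the inner product <x, y> = x^T B y,
    B symmetric positive definite. Returns Q, R with A = Q R and Q^T B Q = I."""
    Q = np.array(A, dtype=float)
    n, m = Q.shape
    R = np.zeros((m, m))
    for j in range(m):
        for i in range(j):
            R[i, j] = Q[:, i] @ B @ Q[:, j]        # B-inner product
            Q[:, j] -= R[i, j] * Q[:, i]
        R[j, j] = np.sqrt(Q[:, j] @ B @ Q[:, j])   # B-norm
        Q[:, j] /= R[j, j]
    return Q, R

rng = np.random.default_rng(0)
A = rng.standard_normal((6, 4))
B = np.diag([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])        # positive diagonal weight
Q, R = mgs_b(A, B)
print(np.allclose(Q.T @ B @ Q, np.eye(4)))         # B-orthonormal columns
print(np.allclose(Q @ R, A))
```

In finite precision the loss of B-orthogonality of Q grows with the conditioning of both A and B, which is exactly the dependence the talk quantifies.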

Talk 4. Backward stability of iterations for computing the polar decomposition
Among the many iterations available for computing the polar decomposition, the most practically useful are the scaled Newton iteration and the recently proposed dynamically weighted Halley iteration. Effective ways to scale these and other iterations are known, but their numerical stability is much less well understood. In this work we show that a general iteration X_{k+1} = f(X_k) for computing the unitary polar factor is backward stable under two conditions. The first condition requires that the iteration is implemented in a mixed backward-forward stable manner, and the second requires that the mapping f does not significantly decrease the size of any singular value relative to the largest singular value. Using this result we show that the dynamically weighted Halley iteration is backward stable when it is implemented using Householder QR factorization with column pivoting and either row pivoting or row sorting. We also prove the backward stability of the scaled Newton iteration under the assumption that matrix inverses are computed in a mixed backward-forward stable fashion; our proof is much shorter than a previous one of Kiełbasiński and Ziętak. We also use our analysis to explain the instability of the inverse Newton iteration and to show that the Newton-Schulz iteration is only conditionally stable. This work shows that by carefully blending perturbation analysis with rounding error analysis it is possible to produce a general result that can prove the backward stability, or predict or explain the instability (as the case may be), of a wide range of practically interesting iterations for the polar decomposition.
Nicholas J. Higham

School of Mathematics
University of [email protected]

Yuji Nakatsukasa
School of Mathematics
University of [email protected]
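
As background for the iterations X_{k+1} = f(X_k) discussed above, here is a NumPy sketch of the plain (unscaled) Newton iteration X_{k+1} = (X_k + X_k^{-*})/2 for the unitary polar factor. This is only the textbook variant; the scaled Newton and dynamically weighted Halley iterations analyzed in the talk are not reproduced here, and the function name `polar_newton` is ours.

```python
import numpy as np

def polar_newton(A, tol=1e-12, maxit=100):
    """Unscaled Newton iteration for the polar decomposition A = U H
    of a nonsingular real matrix A: X_{k+1} = (X_k + X_k^{-T}) / 2."""
    X = np.array(A, dtype=float)
    for _ in range(maxit):
        Xnew = 0.5 * (X + np.linalg.inv(X).T)
        if np.linalg.norm(Xnew - X, 'fro') <= tol * np.linalg.norm(Xnew, 'fro'):
            X = Xnew
            break
        X = Xnew
    U = X
    H = U.T @ A            # Hermitian (here symmetric) factor
    return U, H

rng = np.random.default_rng(1)
A = rng.standard_normal((5, 5))
U, H = polar_newton(A)
print(np.allclose(U.T @ U, np.eye(5)))   # U orthogonal
print(np.allclose(U @ H, A))             # A = U H
```

The iteration converges globally and quadratically for nonsingular A; scaling only accelerates the early iterations, which is why the stability question the talk addresses is separate from the convergence question.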

MS 41. Recent advances in model reduction - Part I of II
Talk 1. The Loewner framework in data-driven model reduction
In this talk we will survey recent advances in model reduction, concentrating on the method of model reduction directly from data using the Loewner framework. In addition we will show how the Loewner framework can be extended to handle reduction of systems depending on parameters.
A.C. Antoulas
Rice [email protected]

Talk 2. Robust computational approaches to H2-optimal model reduction
We present an approach to model reduction for MIMO linear dynamical systems that is numerically stable, nonintrusive, and computationally tractable even for very large order systems. Our strategy is a hybrid of the Iterative Rational Krylov Algorithm (IRKA) and a trust-region method with a logarithmic barrier that guarantees stability. The method produces a monotonically improving (with respect to the H2 error) sequence of stable reduced order models that is rapidly and globally convergent. The effectiveness of the algorithm is illustrated through numerical examples drawn from a variety of challenging dynamical systems that arise in practical settings.
Christopher Beattie
Dept. of Mathematics
Virginia Polytechnic Institute and State [email protected]

Serkan Gugercin
Dept. of Mathematics
Virginia Polytechnic Institute and State [email protected]

Talk 3. Reduced order modeling via frames
In this talk we present a new concept that arises in the discretization of PDEs. Instead of using bases (like hierarchical finite elements) to represent the solution in a certain function space, we use frames, which consist of a set of standard basis functions enriched by specific functions that make it possible to address specific solution behavior such as singularities, boundary layers, or small-amplitude, high-frequency oscillations. We then determine sparse solutions represented in these frames by solving the resulting under-determined problems. We show that by doing this in a multilevel approach we can achieve much smaller models than with classical adaptive FEM.
Volker Mehrmann
Inst. f. Mathematik
TU [email protected]

Sadegh Jokar
Inst. f. Mathematik, TU [email protected]

Sarosh Quraischi
Inst. f. Mathematik, TU [email protected]

Talk 4. Semidefinite Hankel-type model reduction based on frequency response matching
In this talk a model reduction method for linear time-invariant systems is discussed. It is based on frequency response matching and semidefinite programming techniques. The method is related to Hankel model reduction, and it can be claimed that the presented method is a scalable approach to Hankel model reduction. Numerical simulations show that the accuracy of the method is comparable to well-known techniques such as balanced truncation and Hankel model reduction. The developed method can be applied to various problems; in this talk we briefly discuss two: parameterized model reduction and reduction in the nu-gap metric.
Aivar Sootla
Dept. of Automatic Control
Lund [email protected]

Anders Rantzer
Dept. of Automatic Control
Lund [email protected]

Kin Cheong Sou
KTH School of Electrical Engineering, Automatic Control, [email protected]

MS 42. Structured solution of nonlinear matrix equations and applications - Part I of II
Talk 1. Structured solution of large-scale algebraic Riccati and nonlinear matrix equations


We consider the solution of large-scale algebraic Riccati and nonlinear matrix equations (AREs, NMEs). For discrete-time AREs, the structure-preserving doubling algorithm will be adapted for the corresponding sparsity and low-rank structures. For continuous-time AREs, the Cayley transform is applied before doubling. A similar methodology is applied to large-scale nonsymmetric AREs and NMEs. With n being the dimension of the equations, the resulting algorithms are of O(n) complexity. Numerical results will be presented. As an example, a DARE with 3.19 billion unknowns was solved using MATLAB on a MacBook Pro to machine accuracy within 1,100 seconds.
Eric King-wah Chu
School of Mathematical Sciences
Building 28, Monash University, VIC 3800, [email protected]

Tiexiang Li
Dept. of Mathematics
Southeast University, Nanjing 211189, People's Republic of [email protected]

Wen-Wei Lin
Dept. of Applied Mathematics
National Chiao Tung University, Hsinchu 300, [email protected]

Peter Chang-Yi Weng
School of Mathematical Sciences
Building 28, Monash University, VIC 3800, [email protected]

Talk 2. Accurate solutions of nonlinear matrix equations in queueing models
In this talk, we discuss numerical issues arising in finite precision implementations of iterative methods for solving nonlinear matrix equations arising in queueing models. Exploiting the structure of the problem, we shall present some numerically more stable implementations. A rounding error analysis together with numerical examples is given to demonstrate the higher accuracy achieved by the refined implementations.
Qiang Ye
Dept. of Mathematics
University of Kentucky, Lexington KY 40506-0027, [email protected]

Talk 3. A numerical approach for solving nonlinear matrix equations in economic dynamics
Modern economic theory views the economy as a dynamical system. The dynamics include changes over time of market behavior such as consumption, investment, labor supply, and technology innovation. For analytical purposes, the naive approach is to set up the Euler equation and subsequently solve it by finding the policy functions. Indeed, this is a process of solving a nonlinear matrix equation. In this work, we propose a Newton iterative scheme for approximating the unknown policy functions. Applications to the neoclassical growth model with leisure choice are used to demonstrate the working of the idea.
Matthew M. Lin
Dept. of Mathematics
National Chung Cheng [email protected]

Moody T. Chu
Dept. of Applied Mathematics
North Carolina State [email protected]

Chun-Hung Kuo
Dept. of Economics
North Carolina State [email protected]

Talk 4. A structure-preserving doubling algorithm for quadratic eigenvalue problems arising from time-delay systems
We propose a structure-preserving doubling algorithm for a quadratic eigenvalue problem arising from the stability analysis of time-delay systems. We are particularly interested in the eigenvalues on the unit circle, which are difficult to estimate. The convergence and backward error of the algorithm are analyzed and three numerical examples are presented. Our experience shows that our algorithm is efficient in comparison to the few existing approaches for small to medium size problems.
Tiexiang Li
Dept. of Mathematics
Southeast University, Nanjing 211189, People's Republic of [email protected]

Eric King-wah Chu
School of Mathematical Sciences
Building 28, Monash University 3800, [email protected]

Wen-Wei Lin
Dept. of Applied Mathematics
National Chiao Tung University, Hsinchu 300, [email protected]

MS 43. Challenges for the solution and preconditioning of multiple linear systems - Part II of II
Talk 1. Low-rank techniques for parameter-dependent linear systems and eigenvalue problems
Motivated by the need to quantify the impact of uncertainties in engineering applications, a number of new approaches for solving parameter-dependent and stochastic PDEs have recently been developed. In particular, this includes sparse grid Galerkin and collocation methods, as well as low-rank techniques. Depending on the regularity of the parameter dependence, these methods are able to deal with potentially many parameters. The aim of this talk is to provide a survey of recent work on low-rank techniques for solving linear systems arising from the spatial discretization of parameter-dependent PDEs. The extension of these techniques to general parameter-dependent eigenvalue problems is nontrivial and will also be briefly discussed.
Christine Tobler
MATHICSE-ANCHP
EPF Lausanne, [email protected]

Daniel Kressner
MATHICSE-ANCHP
EPF Lausanne, [email protected]

Talk 2. Recycling Krylov subspace information in sequences of linear systems
Many problems in numerical simulations in physics require the solution of long sequences of slowly changing linear systems. One problem that is of interest to us arises in Lattice QCD simulations, e.g., while computing masses of elementary particles. In each time step, we have to solve a linear system with a Dirac operator which changes slightly from time step to time step. Based on the work of M. Parks, E. de Sturler et al. (Recycling Krylov subspaces for sequences of linear systems, SIAM J. on Scientific Computing, 28 (2006), pp. 1651-1674), we will show how the cost of solving subsequent systems is reduced by recycling selected subspaces generated for previous systems. Furthermore, we have investigated how the algorithm behaves when we use the solution of the previous system as an initial guess, and also when we use different kinds of extrapolations of the previous solutions as an initial guess. We will show these results and the effectiveness of the algorithm in comparison to the algorithm that solves each system separately.
Nemanja Bozovic
Fachbereich C, Mathematik und Naturwissenschaften
Bergische Universität [email protected]
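
The simplest idea mentioned in the abstract, reusing the previous solution as the initial guess for the next system, can be demonstrated with a hand-rolled conjugate gradient solver on an SPD model problem. This is only an illustration of warm starting, not the recycled-subspace method of the talk (whose setting is non-Hermitian Dirac operators); all names and matrices below are ours.

```python
import numpy as np

def cg(A, b, x0, tol=1e-10, maxit=1000):
    """Conjugate gradients for SPD A; returns the solution and iteration count."""
    x = x0.copy()
    r = b - A @ x
    p = r.copy()
    rs = r @ r
    for k in range(maxit):
        if np.sqrt(rs) <= tol * np.linalg.norm(b):
            return x, k
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x, maxit

rng = np.random.default_rng(2)
n = 200
M = rng.standard_normal((n, n))
A1 = M @ M.T + n * np.eye(n)              # SPD model matrix
b = rng.standard_normal(n)
x1, it_cold = cg(A1, b, np.zeros(n))      # first system, cold start

A2 = A1 + 1e-3 * np.eye(n)                # slowly changing next system
x2a, it_restart = cg(A2, b, np.zeros(n))  # solve from scratch
x2b, it_warm = cg(A2, b, x1)              # previous solution as initial guess
print(it_warm, it_restart)
```

Because x1 nearly solves the perturbed system, the warm-started residual is already small and CG needs noticeably fewer iterations, which is the effect the recycling strategies of the talk amplify.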

Talk 3. Efficiently updating preconditioners in quantum Monte Carlo simulations
In the quantum Monte Carlo method for computing properties of materials, we need to solve a very long sequence of linear systems arising from importance sampling. Each system in the sequence involves a so-called Slater matrix, and the matrices change by one row (or a few rows) at a time, corresponding to small moves by particles. We will combine a method for incremental matrix reordering with multilevel preconditioning to develop effective preconditioners that can be recycled efficiently.
Arielle Grim McNally
Department of Mathematics
Virginia Tech, Blacksburg, Va., [email protected]

Eric de Sturler
Department of Mathematics
Virginia Tech, Blacksburg, Va., [email protected]

Kapil Ahuja
Max Planck Institute for Dynamics of Complex Technical Systems
Magdeburg, [email protected]

Li Ming
Department of Mathematics
Virginia Tech, Blacksburg, Va., [email protected]

Talk 4. A domain decomposition preconditioned recycling GMRES for stochastic parabolic PDE
We discuss an implicit space-time approach for solving stochastic parabolic PDEs. We first decouple the space-time discretized stochastic equation into uncoupled deterministic systems by using a Karhunen-Loève expansion and double orthogonal polynomials. Then a domain decomposition method is combined with recycling GMRES to solve the large number of systems with similar structures. We report experiments obtained on a parallel computer with a large number of processors. This is joint work with Cui Cong.
Xiao-Chuan Cai
Department of Computer Science
University of Colorado at Boulder
Boulder, Co., [email protected]

MS 44. Different perspectives on conditioning and numerical stability - Part II of II
Talk 1. Accuracy and sensitivity of Monte Carlo matrix multiplication algorithms
Randomized matrix multiplication algorithms were designed to approximate very large matrix products for which a deterministic algorithm is prohibitively expensive. For an algorithm introduced by Drineas, Kannan, and Mahoney, we analyze the error resulting from randomization and derive a bound that improves existing results. In addition, we formulate a measure for the sensitivity of the algorithm to perturbations in the input and present a bound for this measure. We also compare the sensitivity of the randomized algorithm to the sensitivity of the deterministic algorithm.
John T. Holodnak
Dept. of Mathematics
North Carolina State University
Raleigh, NC 27695, [email protected]

Ilse C. F. Ipsen
Dept. of Mathematics
North Carolina State University
Raleigh, NC 27695, [email protected]
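
For readers unfamiliar with the algorithm of Drineas, Kannan, and Mahoney, here is a short NumPy sketch of the basic Monte Carlo matrix multiplication: sample column-row outer products with probabilities proportional to the products of their norms and average the rescaled samples. The function name `mc_matmul` and the error threshold in the demo are ours, not from the talk.

```python
import numpy as np

def mc_matmul(A, B, t, rng):
    """Monte Carlo approximation of A @ B from t sampled column-row outer
    products, using the norm-based sampling probabilities of Drineas,
    Kannan & Mahoney. The estimator is unbiased."""
    norms = np.linalg.norm(A, axis=0) * np.linalg.norm(B, axis=1)
    p = norms / norms.sum()
    idx = rng.choice(A.shape[1], size=t, p=p)     # i.i.d. column indices
    # rescale each sampled column by 1/p and average the outer products
    return (A[:, idx] / p[idx]) @ B[idx, :] / t

rng = np.random.default_rng(3)
A = rng.standard_normal((30, 400))
B = rng.standard_normal((400, 30))
exact = A @ B
approx = mc_matmul(A, B, t=20000, rng=rng)
err = np.linalg.norm(approx - exact) / np.linalg.norm(exact)
print(err)
```

The expected Frobenius-norm error decays like ||A||_F ||B||_F / sqrt(t), so the randomization error the talk analyzes is controlled by the sample size t.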

Talk 2. Hyperdeterminant and the condition number of a multilinear system
The condition number of a problem is the reciprocal of its normalized distance to the nearest ill-posed instance in that class of problems. We shall see that in solving a system of multilinear equations, the set of ill-posed problems is given by the set of coefficient tensors with vanishing hyperdeterminant. The parallel with matrices goes further: the hyperdeterminant of a tensor is zero iff it has a zero singular value and, in the case of a symmetric tensor, iff it has a zero eigenvalue. Moreover, the criterion is invariant under Gaussian elimination or Householder/Givens transformations applied to 'all k sides' of the tensor viewed as a k-dimensional hypermatrix.
Lek-Heng Lim
Department of Statistics
University of [email protected]

Talk 3. Condition numbers and backward errors in a functional setting
In this talk we present functional perturbation results, i.e., a functional normwise backward error, for PDE eigenvalue problems. Inspired by the work of Arioli et al. for linear systems arising from the finite element discretization of boundary value problems, we extend the ideas of the functional Compatibility Theorem and condition numbers to eigenvalue problems in their variational formulation. Moreover, we discuss a new spacewise backward error for PDEs using componentwise error analysis, i.e., by performing the error analysis using the so-called hypernorms of order k.
Agnieszka Miedlar
Institute of Mathematics
Technical University of [email protected]

Mario Arioli
STFC Rutherford Appleton [email protected]

Talk 4. Orthogonality and stability in large-sparse-matrix iterative algorithms
Many iterative algorithms for large sparse matrix problems are based on orthogonality, but this can rapidly be lost using vector orthogonalization (subtracting multiples of earlier vectors from the latest vector to produce the next orthogonal vector). Yet these are among the best algorithms we have, and include the Lanczos process, CG, Golub and Kahan bidiagonalization, and MGS-GMRES. Here we describe a form of orthogonal matrix that arises from any sequence of supposedly orthogonal vectors. We illustrate some of its properties, including a beautiful measure of orthogonality of the original set of vectors. We will show how this leads to new concepts of conditioning and stability for these algorithms.
Chris Paige
School of Computer Science
McGill [email protected]

Wolfgang Wulling
W2 Actuarial & Math Services [email protected]

MS 45. Recent advances in model reduction - Part II of II
Talk 1. Automating DEIM for nonlinear model reduction
The discrete empirical interpolation method (DEIM) can provide spectacular dimension and complexity reduction for challenging systems of large scale nonlinear ordinary differential equations. DEIM replaces the orthogonal projection of POD with an interpolatory projection of the nonlinear term that only requires the evaluation of a few selected components. However, the implementation at present is intrusive in the sense that a user must provide a scheme for evaluating the selected components of the nonlinear terms in order to integrate the reduced order system. We have utilized some of the techniques of automatic differentiation to fully automate this process. In particular, this approach will automatically generate a new code derived directly from the original C code for the right hand side of a first order ODE.
D.C. Sorensen
Dept. of Computational and Applied Math.
Rice [email protected]

Russell L. Carden
Dept. of Computational and Applied Math.
Rice [email protected]
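
The interpolatory projection at the heart of DEIM rests on a greedy selection of interpolation indices from a POD basis. Here is a minimal NumPy sketch of that selection step (in the spirit of Chaturantabut and Sorensen); the function name `deim_indices` is ours, and the automation via automatic differentiation described in the talk is not shown.

```python
import numpy as np

def deim_indices(U):
    """Greedy DEIM point selection for an n-by-m basis U: pick the index of
    the largest residual of each basis vector after interpolating it at the
    previously chosen points."""
    n, m = U.shape
    p = [int(np.argmax(np.abs(U[:, 0])))]
    for l in range(1, m):
        c = np.linalg.solve(U[np.ix_(p, range(l))], U[p, l])
        r = U[:, l] - U[:, :l] @ c          # interpolation residual
        p.append(int(np.argmax(np.abs(r))))
    return np.array(p)

rng = np.random.default_rng(4)
U, _ = np.linalg.qr(rng.standard_normal((100, 8)))   # orthonormal basis
p = deim_indices(U)
# The DEIM projection U (U[p,:])^{-1} f[p] is exact for any f in span(U):
f = U @ rng.standard_normal(8)
f_deim = U @ np.linalg.solve(U[p, :], f[p])
print(np.allclose(f_deim, f))
```

Only the m selected components f[p] of the nonlinear term need to be evaluated online, which is exactly the user-supplied component scheme the talk's automation replaces.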

Talk 2. Model reduction for optimal control problems in field-flow fractionation
We discuss the application of model order reduction to optimal control problems governed by coupled systems of the Stokes-Brinkman and advection-diffusion equations. Such problems arise in field-flow fractionation processes for the efficient and fast separation of particles of different size in microfluidic flows. Our approach is based on a combination of balanced truncation and tangential interpolation for model reduction of the semidiscretized optimality system. Numerical results demonstrate the properties of this approach.
Tatjana Stykel
Institut für Mathematik
Universität [email protected]

Carina Willbold
Institut für Mathematik
Universität [email protected]

Talk 3. Numerical implementation of the iterative rational Krylov algorithm for optimal H2 model order reduction
The Iterative Rational Krylov Algorithm (IRKA) for model order reduction (Gugercin, Antoulas, Beattie 2008) has recently attracted attention because of its effectiveness in real world applications, as well as because of its mathematical elegance. Our current work is focused on the development of efficient and numerically reliable mathematical software that implements the IRKA algorithm. The first step is, necessarily, a theoretical study of the algorithm. We analyze the convergence of the fixed point iterations that run in the background of IRKA, in particular the morphology of the mapping σ^(k+1) = φ(σ^(k)) (fixed points, periodic points and their classification). Other theoretical issues include perturbation theory to analyze stability of the shifts, revealing relevant condition numbers, the Cauchy-like structure of certain key matrices, the connection of the fixed point iterations with pole placement, a proper stopping criterion that translates into a backward stability relation, etc.
Besides rich theory, IRKA also offers many numerical challenges. How should the algorithm be implemented efficiently using direct or iterative solvers for V(σ) = ((σ_1 I − A)^{-1} b, . . . , (σ_r I − A)^{-1} b) and W(σ) = ((σ_1 I − A^T)^{-1} c, . . . , (σ_r I − A^T)^{-1} c), and how should numerical rank deficiency be dealt with? How should iterative solvers be adapted in an inner loop that communicates with the outer fixed point iteration loop? When to stop? This requires estimates in the H2 space, and we show how to combine them with the usual stopping scheme for fixed point iterations. All these and many other questions (e.g. implementations on modern parallel architectures such as CPU/GPU) are analyzed during software development. We will give some answers and illustrate the performance of the software package.
Zlatko Drmač
Dept. of Mathematics
University of [email protected]

Christopher Beattie
Dept. of Mathematics
Virginia Polytechnic Institute and State [email protected]

Serkan Gugercin
Dept. of Mathematics
Virginia Polytechnic Institute and State [email protected]
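
A bare-bones dense sketch of the IRKA fixed point iteration σ^(k+1) = φ(σ^(k)) for a SISO system is given below, assuming real shifts and using none of the safeguards (rank-deficiency handling, inner iterative solvers, backward-stable stopping) that the talk's software addresses; the function name `irka` and the test system are ours.

```python
import numpy as np

def irka(A, b, c, r, tol=1e-8, maxit=200):
    """Dense SISO IRKA sketch: project onto rational Krylov spaces built from
    the current shifts, then set the new shifts to the mirror images of the
    reduced poles (real parts only, a simplification of the general method)."""
    n = A.shape[0]
    sigma = np.linspace(1.0, 10.0, r)          # crude initial shifts
    for _ in range(maxit):
        V = np.column_stack([np.linalg.solve(s * np.eye(n) - A, b) for s in sigma])
        W = np.column_stack([np.linalg.solve(s * np.eye(n) - A.T, c) for s in sigma])
        V, _ = np.linalg.qr(V)
        W, _ = np.linalg.qr(W)
        Ar = np.linalg.solve(W.T @ V, W.T @ A @ V)   # oblique projection
        new_sigma = np.sort(-np.linalg.eigvals(Ar).real)
        if np.max(np.abs(new_sigma - sigma)) < tol * np.max(np.abs(new_sigma)):
            sigma = new_sigma
            break
        sigma = new_sigma
    br = np.linalg.solve(W.T @ V, W.T @ b)
    cr = V.T @ c
    return Ar, br, cr, sigma

rng = np.random.default_rng(5)
n = 60
A = -np.diag(np.linspace(2.0, 100.0, n)) + 0.01 * rng.standard_normal((n, n))
b = rng.standard_normal(n)
c = rng.standard_normal(n)
Ar, br, cr, sigma = irka(A, b, c, r=4)
print(Ar.shape)
```

Each sweep costs r shifted solves with A and A^T, which is precisely where the direct-versus-iterative solver questions raised in the abstract enter.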

Talk 4. Low rank deflative/iterative solutions of Lur'e equations
The bottleneck of model reduction by passivity-preserving balanced truncation is the numerical solution of Lur'e equations. A typical approach is an a priori perturbation leading to an algebraic Riccati equation. This is, however, insufficiently understood from an analytical point of view and, from a numerical point of view, leads to an ill-conditioned problem. Hence we follow an alternative approach consisting basically of two steps:

a) 'Filter out' the 'critical part' of the Lur'e equations: this leads to an eigenproblem of moderate complexity; the outcome is an algebraic Riccati equation on some subspace.

b) Solve the algebraic Riccati equation on the subspace.
We show that this method provides low rank approximative solutions of the Lur'e equations. In particular, this makes the presented method feasible for large-scale model order reduction problems.


Timo Reis
Fachbereich Mathematik
Universität [email protected]

Federico Poloni
Institut für Mathematik
Technische Universität [email protected]

MS 46. Structured solution of nonlinear matrix equations and applications - Part II of II
Talk 1. Inertia and rank characterizations of the expressions A − BXB∗ − CYC∗ and A − BXC∗ ± CX∗B∗
In this talk we consider the admissible inertias and ranks of the expressions A − BXB∗ − CYC∗ and A − BXC∗ ± CX∗B∗ with unknowns X and Y in the four cases when these expressions are: (i) complex self-adjoint, (ii) complex skew-adjoint, (iii) real symmetric, (iv) real skew-symmetric. We also provide a construction for X and Y to achieve the desired inertia/rank that uses only unitary/orthogonal transformations, thus leading to a numerically reliable construction. Consequently, necessary and sufficient solvability conditions for the matrix equations

A − BXB∗ − CYC∗ = 0

and
A − BXC∗ ± CX∗B∗ = 0

are provided.
Delin Chu
Dept. of Mathematics
National University of Singapore, [email protected]

Talk 2. Structure-preserving Arnoldi-type algorithm for solving eigenvalue problems in leaky surface wave propagation
We study the generalized eigenvalue problems (GEP) arising from modeling leaky surface wave propagation in an acoustic resonator with infinitely many periodically arranged interdigital transducers. The constitutive equations are discretized by the finite element method with mesh refinement along the electrode interface and corners. The associated GEP is then transformed to a T-palindromic quadratic eigenvalue problem so that the eigenpairs can be accurately and efficiently computed using a structure-preserving algorithm with a generalized T-skew-Hamiltonian implicitly restarted Arnoldi method. Our numerical results show that the eigenpairs produced by the proposed structure-preserving method not only preserve the reciprocal property but also possess high efficiency and accuracy.
Tsung-Ming Huang
Dept. of Mathematics
National Taiwan Normal University, Taipei 116, [email protected]

Wen-Wei Lin
Dept. of Applied Mathematics
National Chiao Tung University, Hsinchu 300, [email protected]

Chin-Tien Wu
Dept. of Applied Mathematics
National Chiao Tung University, Hsinchu 300, [email protected]

Talk 3. Structure-preserving curve for symplectic pairs
We study the stabilizing solution of the equation X + A∗X^{-1}A = Q, where Q is Hermitian positive definite. We construct a smooth curve, parameterized by t ≥ 1, of symplectic pairs with a special structure, in which the curve passes through all iteration points generated by the known numerical methods, including the fixed-point iteration, the structure-preserving doubling algorithm, and Newton's method, under specified conditions. We give a necessary and sufficient condition for the existence of these structured symplectic pairs and characterize the behavior of this curve. We also use this curve to measure the convergence rates of these numerical methods. Some numerical results are presented.
Yueh-Cheng Kuo
Dept. of Applied Mathematics
National University of Kaohsiung, Kaohsiung, 811, [email protected]

Shih-Feng Shieh
Dept. of Mathematics
National Taiwan Normal University, Taipei, [email protected]
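
Of the methods named in the abstract, the fixed-point iteration for X + A∗X^{-1}A = Q is the easiest to write down; the following NumPy sketch (our own illustration, with a small contrived example, not from the talk) iterates X_{k+1} = Q − A^T X_k^{-1} A, which converges linearly to the maximal positive definite solution when one exists.

```python
import numpy as np

def fixed_point_nme(A, Q, tol=1e-12, maxit=500):
    """Fixed-point iteration X_{k+1} = Q - A^T X_k^{-1} A for the real
    nonlinear matrix equation X + A^T X^{-1} A = Q, started from X_0 = Q."""
    X = Q.copy()
    for _ in range(maxit):
        Xnew = Q - A.T @ np.linalg.solve(X, A)
        if np.linalg.norm(Xnew - X) <= tol * np.linalg.norm(Xnew):
            return Xnew
        X = Xnew
    return X

A = np.array([[0.3, 0.1],
              [0.0, 0.2]])
Q = 2.0 * np.eye(2)
X = fixed_point_nme(A, Q)
residual = X + A.T @ np.linalg.solve(X, A) - Q
print(np.linalg.norm(residual))
```

The structure-preserving curve of the talk interpolates between such fixed-point iterates and the quadratically convergent doubling and Newton iterates.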

Talk 4. A doubling algorithm with shift for solving a nonsymmetric algebraic Riccati equation
In this talk, we analyze a special instance of the nonsymmetric algebraic matrix Riccati equation (NARE) arising from transport theory. Traditional approaches for finding the minimal nonnegative solution of the NARE are based on fixed point iteration, and the speed of convergence is linear. Recently, a structure-preserving doubling algorithm (SDA) with quadratic convergence was designed to improve the speed of convergence. Our contribution is to show that, applied with a shift technique, the SDA is guaranteed to converge quadratically with no breakdown. Also, we modify the conventional simple iteration algorithm to dramatically improve the speed of convergence.
Chun-Yueh Chiang
Center for General Education
National Formosa [email protected]

Matthew M. Lin
Department of Mathematics
National Chung Cheng [email protected]

MS 47. Generalized inverses and applications - Part II of II
Talk 1. On a partial order defined on certain matrices
For a given matrix A ∈ C^{m×n}, we recall that the weighted Moore-Penrose inverse with respect to two Hermitian positive definite matrices M ∈ C^{m×m} and N ∈ C^{n×n} is the unique solution X ∈ C^{n×m} satisfying the equations AXA = A, XAX = X, (MAX)∗ = MAX, (NXA)∗ = NXA. This matrix will be used to define a partial order on a certain class of complex matrices. Some properties of predecessors and successors of a given matrix in that class will be established.
This paper has been partially supported by Universidad Nacional de La Pampa (Facultad de Ingeniería) of Argentina (grant Resol. N. 049/11) and by the Ministry of Education of Spain (Grant DGI MTM2010-18228).
Néstor Thome
Instituto Universitario de Matemática Multidisciplinar
Universitat Politècnica de València, [email protected]

Araceli Hernández
Facultad de Ingeniería
Universidad Nacional de La Pampa, General Pico, [email protected]

Marina Lattanzi
Facultad de Ciencias Exactas y Naturales
Universidad Nacional de La Pampa, Santa Rosa, [email protected]

Fabian Urquiza
Facultad de Ingeniería
Universidad Nacional de La Pampa, General Pico, [email protected]
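
The four defining equations of the weighted Moore-Penrose inverse quoted in the abstract can be checked numerically via the well-known reduction A⁺_{M,N} = N^{-1/2} (M^{1/2} A N^{-1/2})⁺ M^{1/2}. The sketch below (our illustration, real case; the function name `weighted_mp` is ours) uses symmetric square roots of the positive definite weights.

```python
import numpy as np

def weighted_mp(A, M, N):
    """Weighted Moore-Penrose inverse A^+_{M,N} = N^{-1/2} (M^{1/2} A N^{-1/2})^+ M^{1/2},
    with symmetric square roots of the positive definite weights M, N."""
    def sqrtm_spd(S):
        w, V = np.linalg.eigh(S)
        return V @ np.diag(np.sqrt(w)) @ V.T
    Mh, Nh = sqrtm_spd(M), sqrtm_spd(N)
    return np.linalg.solve(Nh, np.linalg.pinv(Mh @ A @ np.linalg.inv(Nh))) @ Mh

rng = np.random.default_rng(6)
A = rng.standard_normal((5, 3))
Mr = rng.standard_normal((5, 5)); M = Mr @ Mr.T + 5 * np.eye(5)   # HPD weight
Nr = rng.standard_normal((3, 3)); N = Nr @ Nr.T + 3 * np.eye(3)   # HPD weight
X = weighted_mp(A, M, N)
# the four defining equations from the abstract:
print(np.allclose(A @ X @ A, A), np.allclose(X @ A @ X, X),
      np.allclose(M @ A @ X, (M @ A @ X).T),
      np.allclose(N @ X @ A, (N @ X @ A).T))
```

Writing B = M^{1/2} A N^{-1/2}, the four equations for X follow directly from the ordinary Penrose equations for B⁺, which is why this reduction works.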

Talk 2. Generalized inverses and path products
Let M = [a_ij] be a lower triangular matrix over an arbitrary ring with unity 1, and set a_i := a_ii. We may split M as M = D + N, where D = diag(a_1, . . . , a_n) and N = M − D. The latter is strictly lower triangular and hence nilpotent. Associated with the matrix M, we consider the weighted digraph G = (V, E), where V = {S_i} is a set of nodes (or sites) and E ⊆ V × V is a collection of arcs such that (S_i, S_j) ∈ E if a_ij ≠ 0. We add a loop at site S_i if a_i ≠ 0. We will relate the existence of the generalized inverses, namely the von Neumann, group, and Moore-Penrose inverses, of a lower triangular matrix, and the fact that these again give matrices of the same type, to path products obtained in the graph associated with the matrix.
Pedro Patrício
Departamento de Matemática e Aplicações
Universidade do Minho, [email protected]

R. Hartwig
Dept. of Mathematics
N.C.S.U. Raleigh, [email protected]

Talk 3. On structured condition numbers for a linear functional of the Tikhonov regularized solution
A structured componentwise and normwise perturbation analysis of Tikhonov regularization problems is presented. The structured matrices we consider include Toeplitz, Hankel, Vandermonde, and Cauchy matrices. Structured normwise, mixed and componentwise condition numbers for these Tikhonov regularization problems are introduced and their expressions are derived. Such expressions for many other classes of matrices can be derived similarly. By means of the power method, fast condition estimation algorithms are proposed. The condition numbers and perturbation bounds are examined on some numerical examples and compared with unstructured normwise, mixed and componentwise condition numbers. The numerical examples show that the structured mixed condition numbers give better perturbation bounds than the others.
Yimin Wei
Department of Mathematics
Fudan University, P.R. [email protected]

Huaian Diao
School of Mathematics and Statistics
Northeast Normal University, Chang Chun 130024, P.R. [email protected]

Talk 4. Explicit characterization of the Drazin index
Let B(X) be the set of bounded linear operators on a Banach space X, and let A ∈ B(X) be Drazin invertible. An element B ∈ B(X) is said to be a stable perturbation of A if B is Drazin invertible and I − A^π − B^π is invertible, where I is the identity operator on X and A^π and B^π are the spectral projectors of A and B, respectively. Under the condition that B is a stable perturbation of A, a formula for the Drazin inverse B^D is derived. Based on this formula, a new approach is provided to the computation of the explicit Drazin indices of certain 2 × 2 operator matrices.
Qingxiang Xu
Department of Mathematics
Shanghai Normal University, Shanghai, 200234, PR [email protected]
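
In the finite-dimensional case, the Drazin inverse appearing above can be computed from the classical formula A^D = A^k (A^(2k+1))⁺ A^k, valid for any k ≥ index(A). The NumPy sketch below (our illustration, unrelated to the operator-theoretic formula of the talk; not numerically robust for ill-conditioned matrices) verifies the defining properties on a small matrix of index 2.

```python
import numpy as np

def drazin(A, k=None):
    """Drazin inverse of a square matrix via A^D = A^k (A^(2k+1))^+ A^k;
    k = n (the matrix dimension) always dominates the index and is safe."""
    n = A.shape[0]
    if k is None:
        k = n
    Ak = np.linalg.matrix_power(A, k)
    return Ak @ np.linalg.pinv(np.linalg.matrix_power(A, 2 * k + 1)) @ Ak

# invertible part [2] plus a nilpotent Jordan block: index(A) = 2
A = np.array([[2.0, 0.0, 0.0],
              [0.0, 0.0, 1.0],
              [0.0, 0.0, 0.0]])
AD = drazin(A)
k = 2  # index of A
print(np.allclose(AD @ A @ AD, AD),
      np.allclose(A @ AD, AD @ A),
      np.allclose(np.linalg.matrix_power(A, k + 1) @ AD,
                  np.linalg.matrix_power(A, k)))
```

The three checks are exactly the defining equations A^D A A^D = A^D, A A^D = A^D A, and A^{k+1} A^D = A^k; the smallest k for which the last one holds is the Drazin index the talk characterizes.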

MS 48. Parallelization of efficient algorithms
Talk 1. A highly scalable error-controlled fast multipole method
We present a linear scaling, error-controlled FMM implementation for long-range interactions of particle systems with open, 1D, 2D and 3D periodic boundary conditions. Like other fast summation algorithms, the FMM allows the total runtime to be reduced significantly. This runtime advantage, however, comes with considerably increased memory requirements, posing constraints on the overall particle system size. In this talk we focus on the reduced memory footprint as well as the communication pattern for trillions of particles. The current code is designed to lower the memory consumption to only 45 bytes per particle, already including a small 16.2% parallel overhead for 300k MPI ranks.
Ivo Kabadshow
Jülich Supercomputing Centre
Research Centre [email protected]

Holger Dachsel
Jülich Supercomputing Centre
Research Centre [email protected]

Talk 2. A parallel fast Coulomb solver based on nonequispaced Fourier transforms
The fast calculation of long-range Coulomb interactions is a computationally demanding problem in particle simulation. Therefore, several fast approximate algorithms have been developed which reduce the quadratic arithmetic complexity of the plain summation to linear complexity (up to a logarithmic factor). This talk focuses on Fourier based methods, with special attention to the application of the nonequispaced fast Fourier transform and its parallelization. We present a massively parallel Coulomb solver software library for distributed memory architectures. The underlying fast algorithms for periodic and non-periodic boundary conditions will be explained and extensive performance evaluations will be presented.
Michael Pippig
Dept. of Mathematics
Chemnitz University of [email protected]

Talk 3. Generalized fast Fourier transforms via CUDA
The fast Fourier transform (FFT) is among the algorithms with the largest impact on science and engineering. By appropriate approximations, this scheme has been generalized to arbitrary spatial sampling points. We discuss in detail the computational costs of this so-called nonequispaced FFT and its variations. Because programmable graphics processing units have evolved into highly parallel, multithreaded, many-core processors with enormous computational capacity and very high memory bandwidth, we parallelized the nonequispaced FFT by means of the Compute Unified Device Architecture (CUDA), using the CUDA FFT library and a dedicated parallelization of the approximation scheme.
Susanne Kunis
Dept. of Mathematics
University of Osnabruck
[email protected]

Talk 4. Efficient regularization and parallelization for sparse grid regression
Regression, the reconstruction of a (high-dimensional) function from scattered data, is a common problem in data-driven tasks. Typically, meshfree methods are employed to circumvent the curse of dimensionality. To deal with regression by discretizing the feature space, sparse grids can be employed. Due to their primarily data-independent approach, sparse grids make it possible to deal with massive amounts of data. Adaptive refinement then allows the method to adapt to the peculiarities of the problem at hand. To be able to deal with large, noisy datasets, efficient algorithms and parallelizations have to be employed, and the full potential of modern hardware architectures has to be exploited. This can be complicated, as one has to deal with hierarchical basis functions on multiple levels. We present the special challenges posed to hierarchical and recursive sparse grid algorithms and discuss efficient solutions for regularization and parallelization.
Dirk Pfluger
Dept. of Informatics
Technische Universitat Munchen
[email protected]

Alexander Heinecke
Dept. of Informatics
Technische Universitat Munchen
[email protected]

MS 49. Analysis and computation on matrix manifolds
Talk 1. Best low multilinear rank approximation of symmetric tensors by Jacobi rotations
We consider third-order symmetric tensors and seek their best low multilinear rank approximations. The proposed algorithm is based on Jacobi rotations, and symmetry is preserved at each iteration. Examples are provided that illustrate the need for such algorithms. Our algorithm converges to stationary points of the objective function. The proof of convergence can be considered an advantage of the algorithm over existing symmetry-preserving algorithms in the literature.
Pierre-Antoine Absil
Department of Mathematical Engineering, ICTEAM Institute
Universite catholique de Louvain
[email protected]

Mariya Ishteva
School of Computational Science and Engineering
Georgia Institute of Technology
[email protected]

Paul Van Dooren
Department of Mathematical Engineering, ICTEAM Institute
Universite catholique de Louvain
[email protected]

Talk 2. Differential geometry for tensors with fixed hierarchical Tucker rank
A number of authors have recently proposed several geometries for rank-structured matrix and tensor spaces, namely for matrices or tensors with fixed matrix, Tucker, and TT rank. In this talk we present a unifying approach for establishing a smooth differential structure on the set of tensors with fixed hierarchical Tucker rank. Our approach describes this set as a smooth submanifold, globally embedded in the space of real tensors. The previous spaces are shown to be specific instances of a particular hierarchical Tucker tree (possibly with additional constraints on the frames). As a numerical example we show how this geometry can be used to dynamically update a time-varying tensor. This approach compares favorably to pointwise SVD-based computations.
Bart Vandereycken
Mathematics Section, ANCHP
Ecole Polytechnique Federale de Lausanne
[email protected]

Andre Uschmajew
Department of Mathematics
Technische Universitat (TU) Berlin
[email protected]

Talk 3. Deterministic approaches to the Karcher mean of positive definite matrices
We propose a deterministic approach to the Karcher mean of positive definite matrices via geometric power means. For a matrix geometric mean G, we construct a one-parameter family Gt of geometric means varying continuously over t ∈ [−1, 1] \ {0} and approaching the Karcher mean as t → 0. Each of these means arises as the unique positive definite solution of a nonlinear matrix equation and has distance at most √t/2 to the Karcher mean. This provides not only a structured and deterministic sequence of matrix means converging to the Karcher mean, but also a simple proof of the monotonicity of the Karcher mean, conjectured by Bhatia and Holbrook, and other new properties, which have recently been established by Lawson and Lim and also by Bhatia and Karandikar using probabilistic methods on the metric structure of positive definite matrices equipped with the trace metric.
Yongdo Lim
Dept. of Mathematics
Kyungpook National University
[email protected]

Talk 4. The Karcher mean: first and second order optimization techniques on matrix manifolds
In this talk, we present a collection of implementations for the Karcher mean, which is a specific instance of the matrix geometric mean. The Karcher mean is defined as the solution to an optimization problem on the manifold of positive definite matrices, where it exhibits an appealing analogy with the arithmetic mean. Generalizing classical optimization schemes results in Riemannian optimization, where the intrinsic properties of the manifold are maintained and exploited throughout the algorithm. We examine several optimization techniques, such as steepest descent, CG, trust-region, and BFGS methods, and compare the results with the ALM, BMP, and CHEAP mean algorithms.
Ben Jeuris
Dept. of Computer Science
Katholieke Universiteit Leuven
[email protected]

Raf Vandebril
Dept. of Computer Science
Katholieke Universiteit Leuven
[email protected]

Bart Vandereycken
Mathematics Institute of Computational Science and Engineering
Ecole Polytechnique Federale de Lausanne
[email protected]

MS 50. Advanced methods for large eigenvalue problems and their applications
Talk 1. DQDS with aggressive early deflation for computing singular values
The DQDS algorithm is the standard method for computing all the singular values of a bidiagonal matrix with high relative accuracy. An efficient implementation is available as the LAPACK subroutine DLASQ. In order to reduce the DQDS runtime, we incorporate into DQDS a technique called aggressive early deflation, which has been applied successfully to the Hessenberg QR algorithm. In addition, a shift-free version of our algorithm has the potential to be parallelized in a pipelined fashion. Our mixed forward-backward stability analysis proves that with our proposed deflation strategy, DQDS computes all the singular values to high relative accuracy.
Kensuke Aishima
Department of Mathematical Informatics
University of Tokyo
[email protected]

Yuji Nakatsukasa
School of Mathematics
University of Manchester
[email protected]

Ichitaro Yamazaki
Dept. of Computer Science
University of Tennessee
[email protected]
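For context, the shift-free dqd transform underlying DQDS can be sketched in a few lines. This is a toy illustration of the subtraction-free recurrences (which is where the high relative accuracy comes from), not LAPACK's DLASQ:

```python
import numpy as np

def dqd_step(q, e):
    """One shift-free dqd transform on the qd arrays of a bidiagonal matrix.

    For B = bidiag(a, b), set q = a**2 (n entries) and e = b**2 (n-1 entries);
    repeating the transform drives e -> 0 and q -> the squared singular
    values, using only additions, multiplications, and divisions.
    """
    n = len(q)
    qh = np.empty(n)
    eh = np.empty(n - 1)
    d = q[0]
    for k in range(n - 1):
        qh[k] = d + e[k]
        eh[k] = e[k] * (q[k + 1] / qh[k])
        d = d * (q[k + 1] / qh[k])
    qh[n - 1] = d
    return qh, eh

# Squared singular values of B = [[2, 1], [0, 1]] are 3 +/- sqrt(5).
q, e = np.array([4.0, 1.0]), np.array([1.0])
for _ in range(100):
    q, e = dqd_step(q, e)
sigma2 = np.sort(q)[::-1]
assert np.allclose(sigma2, [3 + np.sqrt(5), 3 - np.sqrt(5)])
```

Shifts (the "s" in dqds) accelerate this linear convergence dramatically; the shift-free variant shown here is the one the talk identifies as amenable to pipelined parallelism.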

Talk 2. A scalable parallel method for large-scale nonlinear eigenvalue problems
In this talk, we present parallel software for nonlinear eigenvalue problems that has interfaces for PETSc. The software implements an eigensolver based on contour integration. It finds a partial set of eigenpairs of large sparse nonlinear eigenvalue problems and has high scalability. We demonstrate the parallel performance of the implemented method through numerical experiments arising from several practical applications.
Kazuma Yamamoto
Dept. of Computer Science
University of Tsukuba
[email protected]

Yasuyuki Maeda
Dept. of Computer Science
University of Tsukuba
[email protected]

Yasunori Futamura
Dept. of Computer Science
University of Tsukuba
[email protected]

Tetsuya Sakurai
Dept. of Computer Science
University of Tsukuba
[email protected]

Talk 3. Application of the Sakurai-Sugiura method in the field of density functional theory on highly parallel systems
Density functional theory (DFT) is one of the most important methods in computational materials science. Despite steadily increasing computational power, computation time is still the limiting factor for many systems of interest. In many cases the most time-consuming part is solving a series of generalized eigenvalue problems that emerge inside an iterative loop. In the context of Siesta, a widespread DFT code, the Sakurai-Sugiura method is a promising approach. This talk will show how, due to its three obvious levels of parallelization, this algorithm scales much better than other widely used libraries (e.g., ScaLAPACK), how it offers possibilities to use many GPUs in parallel, and how it also supports dealing with sparse matrices.
Georg Huhs
Barcelona Supercomputing Center
Centro Nacional de Supercomputacion, Barcelona, Spain
[email protected]

Talk 4. MERAM for neutron physics applications using the YML environment on post-petascale heterogeneous architectures
The eigenvalue problem is one of the key elements in neutron physics applications. Studying and designing efficient and scalable eigenvalue solvers is thus necessary for post-petascale neutron physics applications. In this talk, after recalling the principle of the multiple explicitly restarted Arnoldi method (MERAM), we will present the design model used for its implementation on distributed heterogeneous architectures. Performance results for typical neutron physics problems on the CURIE and GRID5000 platforms, making use of the software environments KASH (Krylov-bAsed Solvers for Hybrid architectures), YML, and PETSc/SLEPc, will be presented.
Christophe Calvin
CEA DEN DANS DM2S SERMA, CEA Saclay
[email protected]

Nahid Emad
University of Versailles
[email protected]

Serge G. Petiton
INRIA, University of Lille 1
[email protected]

Jerome Dubois
CEA DEN DANS DM2S SERMA, CEA Saclay
[email protected]

Makarem Dandouna
University of Versailles
[email protected]

MS 51. Accurate and verified numerical computations for numerical linear algebra
Talk 1. Product decomposition and its applications
A product decomposition algorithm for a real number on a floating-point system is proposed. The product decomposition of a real number x is a floating-point decomposition defined by

x ≃ x1 (1 + x2) (1 + x3) · · · (1 + xn),

where the xi (1 ≤ i ≤ n) are floating-point numbers. Here x1 is an approximation of x, and each xi (2 ≤ i ≤ n) approximates the relative error of the approximation obtained from x1, x2, . . . , xi−1. This decomposition is used in numerical analysis, for example to calculate accurate logarithm values. In this talk we present an efficient algorithm to construct the product decomposition, together with its applications.
Naoya Yamanaka
Research Institute for Science and Engineering, Waseda University
naoya [email protected]

Shin’ichi Oishi
Faculty of Science and Engineering, Waseda University
[email protected]
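A toy illustration of such a decomposition, written by us for this summary: exact rationals stand in for the real number x, whereas the talk's actual algorithm works purely in floating point and is not reproduced here. Each extra factor captures the floating-point relative error left over by the previous ones:

```python
from fractions import Fraction

def product_decomposition(x, n):
    """Sketch of a product decomposition x ~ x1 (1 + x2) ... (1 + xn).

    x is an exact rational (stand-in for a real number); x1 = fl(x), and each
    later x_k is the floating-point value of the relative error remaining
    after the previous factors, so every factor is an ordinary float.
    """
    x1 = float(x)
    terms = [x1]
    r = x / Fraction(x1)           # exact ratio x / x1, close to 1
    for _ in range(n - 1):
        xk = float(r - 1)          # current relative error, rounded to float
        terms.append(xk)
        r = r / (1 + Fraction(xk))
    return terms

x = Fraction(1, 3)
terms = product_decomposition(x, 3)
approx = Fraction(terms[0])
for t in terms[1:]:
    approx *= 1 + Fraction(t)
# Each factor shrinks the relative error by roughly the unit roundoff,
# so three factors leave an error far below double precision.
assert abs(approx / x - 1) < Fraction(1, 10**30)
```

The squaring effect (error roughly u, then u^2, then u^3, for unit roundoff u) is what makes such decompositions useful for accurate elementary-function evaluation.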

Talk 2. The MPACK: multiple precision version of BLAS and LAPACK


We have been developing a multiple precision version of a linear algebra package based on BLAS and LAPACK. We translated and reimplemented the Fortran code in C++, and the MPACK supports the GMP, MPFR, and QD (DD) multiple precision libraries; users can choose the library according to their needs. Currently the BLAS part is complete, and 100 LAPACK routines are implemented and well tested. Moreover, the DD version of the matrix-matrix multiplication routine has been accelerated using an NVIDIA C2050 GPU. Development is ongoing at http://mplapack.sourceforge.net/, and the code is available under an open source (2-clause BSD) license.
Maho Nakata
Advanced Center for Computing and Communication, RIKEN
[email protected]

Talk 3. On eigenvalue computations of nonderogatory matrices
In this talk I present some problems and results concerning the eigenvalue problem for nonderogatory matrices. The first group of problems is related to the detection of multiple eigenvalues of unreduced upper Hessenberg matrices and their refinement in multiple precision floating-point arithmetic. The second group of problems concerns the perturbation of invariant subspaces and its characterization in terms of the matrix perturbation.
Aurel Galantai
Institute of Intelligent Engineering Systems
Obuda University, 1034 Budapest, Becsi ut 96/b, Hungary
[email protected]

Talk 4. Verified solutions of sparse linear systems
Algorithms for calculating verified solutions of sparse linear systems are proposed. The proposed algorithms are based on standard numerical algorithms for a block LDL^T factorization and on error estimates for specified eigenvalues by Lehmann's theorem. Numerical results are presented illustrating the performance of the proposed algorithms.
Takeshi Ogita
Division of Mathematical Sciences
Tokyo Woman's Christian University
[email protected]

MS 52. Numerical linear algebra libraries for high end computing - Part I of II
Talk 1. Large-scale eigenvalue computation with PETSc and YML
In the context of parallel and distributed computation, the currently existing numerical libraries do not allow sequential and parallel code reuse. Moreover, they are not able to exploit the multi-level parallelism offered by modern emerging numerical methods. In this talk, we present a design model for numerical libraries based on a component approach that allows code reuse and problem-solving scalability. We then present an implementation of this design using the YML scientific workflow environment jointly with the object-oriented library PETSc. Numerical experiments on the GRID5000 platform and NERSC computers validate our approach and show its efficiency.
Makarem Dandouna
PRISM laboratory, University of Versailles, France
[email protected]

Nahid Emad
PRISM laboratory, University of Versailles, France
[email protected]

L. A. (Tony) Drummond
Computational Research Division, Lawrence Berkeley National Laboratory, USA
[email protected]

Talk 2. Sparse matrix-matrix operations in PETSc
Sparse matrix-matrix operations, A*B, A^T*B (or A*B^T), and P^T*A*P (or R*A*R^T), are computational kernels in the PETSc library. The recent addition of a geometric-algebraic multigrid preconditioner requires these matrix operations to scale to tens of thousands of processor cores, which forced us to take an innovative approach to algorithm design and implementation for these operations, including some of the relevant data structures. In this talk, we will present our newly developed scalable sparse matrix-matrix algorithms, implementations, and performance, along with lessons learned and experience gained from this work.
Hong Zhang
Dept. of Computer Science, Illinois Institute of Technology, USA, and Argonne National Laboratory, USA
[email protected]

Barry Smith and the PETSc Team
Mathematics and Computer Science Division, Argonne National Laboratory, USA
[email protected]
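The triple product P^T*A*P mentioned above is, for instance, the Galerkin coarse-grid operator of algebraic multigrid. A small scipy sketch of ours (not PETSc code) for the 1-D Laplacian with linear-interpolation prolongation shows why the sparse-sparse kernel matters:

```python
import numpy as np
import scipy.sparse as sp

# Galerkin coarse-grid operator A_c = P^T A P for a 1-D Poisson matrix and a
# linear-interpolation prolongator P (a standard multigrid choice).
n = 7                                   # fine grid; 3 coarse points
A = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(n, n), format="csr")

rows, cols, vals = [], [], []
for j in range(3):                      # coarse point j sits at fine node 2j+1
    for off, w in ((-1, 0.5), (0, 1.0), (1, 0.5)):
        rows.append(2 * j + 1 + off)
        cols.append(j)
        vals.append(w)
P = sp.csr_matrix((vals, (rows, cols)), shape=(n, 3))

Ac = (P.T @ A @ P).tocsr()              # sparse-sparse products stay sparse
# Linear interpolation on the 1-D Laplacian halves the stencil scale:
target = 0.5 * sp.diags([-1, 2, -1], [-1, 0, 1], shape=(3, 3)).toarray()
assert np.allclose(Ac.toarray(), target)
```

In a multigrid hierarchy this product is recomputed on every level, so its scalability directly limits the preconditioner setup time, which is the motivation given in the talk.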

Talk 3. Hierarchical QR factorization algorithms for multi-core cluster systems
This paper describes a new QR factorization algorithm especially designed for massively parallel platforms combining parallel distributed multi-core nodes. These platforms represent the present and the foreseeable future of high-performance computing. Our new QR factorization algorithm falls into the category of tile algorithms, which naturally enable good data locality for the sequential kernels executed by the cores (high sequential performance), a low number of messages in a parallel distributed setting (small latency term), and fine granularity (high parallelism).
Julien Langou
University of Colorado Denver, USA (supported by National Science Foundation grant # NSF)
[email protected]

Jack Dongarra
University of Tennessee Knoxville, USA; Oak Ridge National Laboratory, USA; University of Manchester, UK
[email protected]

Mathieu Faverge
University of Tennessee Knoxville, USA
[email protected]

Thomas Herault
University of Tennessee Knoxville, USA
[email protected]

Yves Robert
University of Tennessee Knoxville, USA; Ecole Normale Superieure de Lyon, France
[email protected]

Talk 4. Towards robust numerical algorithms for exascale simulation
The advent of exascale machines will require the use of parallel resources at an unprecedented scale, leading to a high rate of hardware faults. High-performance computing applications that aim to exploit all these resources will thus need to be resilient, i.e., able to compute a correct output in the presence of faults. Contrary to checkpointing techniques or Algorithm-Based Fault Tolerance (ABFT) mechanisms, strategies based on interpolation for recovering lost data require no extra work or memory when no fault occurs. We apply this latter strategy to Krylov iterative solvers, which are often the most computationally intensive kernels in HPC simulation codes. Our main contribution is the proposition and discussion of several variants compared to previous works. To that end, we propose a new variant for recovering data, study the occurrence of multiple faults, consider the GMRES, CG, and BiCGStab solvers, and inject faults according to an advanced model of fault distribution. We assess the impact of the recovery method, the fault rate, and the number of processors on resilience. Rather than implementing a particular parallel code, we assess all our strategies with sequential Matlab implementations that simulate parallel executions.
Emmanuel Agullo
INRIA Bordeaux - Sud-Ouest, 351 Cours de la Liberation, Batiment A29 bis, 33405 Talence Cedex, France
[email protected]

Luc Giraud
INRIA Bordeaux - Sud-Ouest, France
[email protected]

Abdou Guermouche
INRIA Bordeaux - Sud-Ouest, France
[email protected]

Jean Roman
INRIA, France
[email protected]

Mawussi Zounon
INRIA, France
[email protected]

MS 53. Efficient preconditioners for real world applications - Part I of II
Talk 1. A parallel factored preconditioner for non-symmetric linear systems
The efficient solution of non-symmetric linear systems is still an open issue on parallel computers. In this communication we generalize to the non-symmetric case the Block FSAI (BFSAI) preconditioner, which has already proved very effective on symmetric problems arising from different applications. BFSAI is a hybrid approach combining an "inner" preconditioner, whose aim is to transform the system into a block diagonal one, with an "outer" one, a block diagonal incomplete decomposition, intended to decrease the conditioning of each block. The proposed algorithm is tested on a number of large-size problems, showing both good robustness and scalability.
Carlo Janna
Dept. of Mathematical Methods and Models for Scientific Applications
University of Padova, Italy
[email protected]

Massimiliano Ferronato
Dept. of Mathematical Methods and Models for Scientific Applications
University of Padova, Italy
[email protected]

Giorgio Pini
Dept. of Mathematical Methods and Models for Scientific Applications
University of Padova, Italy
[email protected]

Talk 2. Preconditioning for linear least-squares problems
In this talk we deal with iterative methods for solving large and sparse linear least squares problems. In particular, we describe two new preconditioning techniques for the CGLS method. First we consider a strategy based on the LU factorization; our approach includes a new reordering based on a specific weighted transversal problem. Direct preconditioning of the normal equations by the balanced symmetric and positive definite factorization is our second approach. Numerical experiments demonstrate the effectiveness of the algorithmic and implementation features of the new approaches.
Miroslav Tuma
Institute of Computer Science
Academy of Sciences of the Czech Republic
[email protected]

Rafael Bru
Institut de Matematica Multidisciplinar
Universitat Politecnica de Valencia
[email protected]

Jose Mas
Institut de Matematica Multidisciplinar
Universitat Politecnica de Valencia
[email protected]

Jose Marin
Institut de Matematica Multidisciplinar
Universitat Politecnica de Valencia
[email protected]

Talk 3. Robust and parallel preconditioners for mechanical problems
We consider the simulation of displacements under loading for mechanical problems. Large discontinuities in material properties lead to ill-conditioned systems of linear equations, which cause slow convergence of the Preconditioned Conjugate Gradient (PCG) method. This paper considers the Recursively Deflated Preconditioned Conjugate Gradient (RDPCG) method for solving such systems. Our deflation technique uses as deflation space the rigid body modes of sets of elements with homogeneous material properties. We show that in the deflated spectrum the small eigenvalues are mapped to zero and no longer negatively affect the convergence. We justify our approach through mathematical analysis, and we show with numerical experiments on both academic and realistic test problems that the convergence of our RDPCG method is independent of discontinuities in the material properties.
Kees Vuik
Delft Institute of Applied Mathematics
Delft University of Technology, The Netherlands
[email protected]

Tom Jonsthovel
Delft Institute of Applied Mathematics
Delft University of Technology, The Netherlands
[email protected]

Martin van Gijzen
Delft Institute of Applied Mathematics
Delft University of Technology, The Netherlands
[email protected]

Talk 4. Block factorized forms of SPAI
In this talk we present new results on block versions of sparse approximate inverse preconditioners M for a sparse matrix A. We consider the Frobenius norm minimization min_M ||AM − I||_F. Blocked versions are interesting because the underlying problem often introduces a block structure in a natural way. Furthermore, they allow more efficient memory access, and they reduce the number of least squares problems that have to be considered in the construction of the preconditioner M. We are interested in determining appropriate block patterns for a general sparse matrix. Therefore, we want to combine columns of M with similar least squares problems into blocks in order to reduce the number of least squares problems that have to be solved for constructing the preconditioner. Furthermore, given an arbitrary blocking, we also have to find the nonzero blocks that must be included to obtain a good preconditioner.
Thomas Huckle
Institut fur Informatik
Technische Universitat Munchen, Germany
[email protected]

Matous Sedlacek
Institut fur Informatik
Technische Universitat Munchen, Germany
[email protected]
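The Frobenius-norm minimization decouples into one small least squares problem per column of M, which is what makes (block) SPAI naturally parallel. A minimal numpy sketch of ours (unblocked, with a prescribed sparsity pattern per column):

```python
import numpy as np

def spai_columns(A, patterns):
    """Unblocked SPAI sketch: min_M ||A M - I||_F over prescribed patterns.

    The objective splits as sum_j ||A m_j - e_j||_2^2, so each column m_j is
    an independent small least squares problem over its index set patterns[j].
    """
    n = A.shape[0]
    M = np.zeros((n, n))
    for j, J in enumerate(patterns):
        e = np.zeros(n)
        e[j] = 1.0
        mj, *_ = np.linalg.lstsq(A[:, J], e, rcond=None)
        M[J, j] = mj
    return M

A = np.array([[4.0, 1.0, 0.0],
              [1.0, 4.0, 1.0],
              [0.0, 1.0, 4.0]])
patterns = [[0, 1], [0, 1, 2], [1, 2]]   # tridiagonal pattern for M
M = spai_columns(A, patterns)
# The patterned minimizer can be no worse than the best diagonal candidate.
D = np.diag(1.0 / np.diag(A))
assert np.linalg.norm(A @ M - np.eye(3)) <= np.linalg.norm(A @ D - np.eye(3)) + 1e-12
```

The blocked variants of the talk replace these scalar columns by column groups, trading a few larger least squares problems for many tiny ones.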

MS 54. Solving ill-posed systems via signal-processing techniques - Part I of II
Talk 1. Sequential updates for L1 minimization: sparse Kalman filtering, reweighted L1, and more
Sparse signal recovery often involves solving an L1-regularized optimization problem. Most existing algorithms focus on the static setting, where the goal is to recover a fixed signal from a fixed system of equations. In this talk, we present a collection of homotopy-based algorithms that dynamically update the solution of the underlying L1 problem as the system changes. The sparse Kalman filter solves an L1-regularized Kalman filter equation for a time-varying signal that follows a linear dynamical system. Our proposed algorithm sequentially updates the solution as new measurements are added and old measurements are removed from the system.
Justin Romberg
Electrical and Computer Engineering
Georgia Institute of Technology
[email protected]

M. Salman Asif
Electrical and Computer Engineering
Georgia Institute of Technology
[email protected]
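The static problem these updates build on, min_x 0.5||Ax − b||² + λ||x||₁, can be solved by the classic ISTA iteration. The sketch below is a generic illustration of ours, not the homotopy algorithms of the talk:

```python
import numpy as np

def ista(A, b, lam, iters=1000):
    """ISTA for min_x 0.5*||Ax - b||_2^2 + lam*||x||_1.

    Each iteration takes a gradient step on the smooth part and then applies
    soft thresholding; step size 1/L with L = ||A||_2^2 ensures convergence.
    """
    L = np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        g = x - A.T @ (A @ x - b) / L
        x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)
    return x

rng = np.random.default_rng(1)
A = rng.standard_normal((50, 20))
x_true = np.zeros(20)
x_true[[3, 11, 17]] = [2.0, -1.5, 1.0]   # sparse ground truth
b = A @ x_true                            # noiseless measurements
x = ista(A, b, lam=0.1)
assert np.linalg.norm(x - x_true) < 0.05  # support and values recovered
```

A homotopy scheme, by contrast, warm-starts from the previous solution and traces the piecewise-linear L1 solution path as rows of (A, b) are added or removed, which is the setting of this talk.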

Talk 2. Solving basis pursuit: infeasible-point subgradient algorithm, computational comparison, and improvements
We propose a subgradient algorithm called ISAL1 for the l1-minimization (basis pursuit) problem, which applies approximate projections via a truncated CG scheme. We will also present the results of an extensive computational comparison of our method and various state-of-the-art l1-solvers on a large test set. It turns out that our algorithm compares favorably. Moreover, we show how integrating a new heuristic optimality check called HSE can improve the solution speed and accuracy of several of these solvers.
Andreas Tillmann
Institute for Mathematical Optimization
Technische Universitat Braunschweig
[email protected]

Dirk Lorenz
Institute for Analysis and Algebra
Technische Universitat Braunschweig
[email protected]

Marc Pfetsch
Institute for Mathematical Optimization
Technische Universitat Braunschweig
[email protected]

Talk 3. Semismooth Newton methods with multi-dimensional filter globalization for l1 optimization
We present a class of methods for l1-regularized optimization problems. They are based on a flexible combination of semismooth Newton steps and globally convergent descent methods. A multidimensional filter framework is used to control the acceptance of semismooth Newton steps. We prove global convergence and transition to fast local convergence for both the convex and the nonconvex case. Numerical results show the efficiency of the method.
Andre Milzarek
Department of Mathematics, TopMath
Technische Universitat Munchen
[email protected]

Michael Ulbrich
Chair of Mathematical Optimization
Technische Universitat Munchen
[email protected]

Talk 4. Improved first-order methods: how to handle constraints, non-smoothness, and slow convergence
There are many specialized solvers that handle specific convex programs efficiently, but few algorithms can deal with general complicated constraints and non-smooth functions. To address these difficult problems, we introduce a framework and software package called TFOCS (Becker/Candes/Grant). The method relies on two tricks: dualization and smoothing. This talk describes the framework and also discusses recent splitting methods such as those of Chambolle and Pock and of Combettes et al. We also cover recent progress in improving the convergence of first-order algorithms by using non-diagonal preconditioners (with J. Fadili).
Stephen Becker
FSMP, laboratoire Jacques-Louis Lions
UPMC (Paris-6)
[email protected]

Jalal Fadili
CNRS-ENSICAEN
Universite de Caen
[email protected]

Emmanuel Candes
Departments of Mathematics and Statistics
Stanford University
[email protected]

Michael Grant
CVX Research
[email protected]

MS 55. Max-algebra - Part I of II
Talk 1. Tropical bounds for the eigenvalues of structured matrices
We establish several inequalities of log-majorization type, relating the moduli of the eigenvalues of a complex matrix or matrix polynomial to the tropical eigenvalues of auxiliary matrix polynomials. This provides bounds that can be computed by combinatorial means. We consider in particular structured matrices and obtain bounds depending on the norms of block submatrices and on the pattern (graph structure) of the matrix.
Marianne Akian
INRIA Saclay–Ile-de-France and CMAP
CMAP, Ecole Polytechnique, Route de Saclay, 91128 Palaiseau Cedex, France
[email protected]

Stephane Gaubert
INRIA Saclay–Ile-de-France and CMAP
CMAP, Ecole Polytechnique, Route de Saclay, 91128 Palaiseau Cedex, France
[email protected]

Meisam Sharify
INRIA Saclay–Ile-de-France and LRI
LRI, Bat 650, Universite Paris-Sud 11, 91405 Orsay Cedex, France
[email protected]

Talk 2. Sensitivity in extremal systems of linear equations and inequalities
A survey of some recent results concerning the properties of finite systems of linear equations and inequalities in extremal algebra will be presented. Problems connected with the sensitivity and parametric analysis of such systems will be discussed. Possible applications of the results to the post-optimal analysis of optimization problems whose sets of feasible solutions are described by systems of (max, min) and (max, plus) linear systems will be shown. The objective functions of the optimization problems are expressed as the maximum of finitely many continuous functions, each depending on one variable.
Karel Zimmermann
Faculty of Mathematics and Physics
Charles University in Prague, Czech Republic
[email protected]

Martin Gavalec
Faculty of Informatics and Management
University of Hradec Kralove, Czech Republic
[email protected]

Talk 3. Multiplicative structure of tropical matrices
I shall report on a programme of research aiming to understand the structure of tropical (max-plus) matrix semigroups. This structure turns out to be intimately connected with the geometry of tropical convexity; indeed, it transpires that almost every algebraic property of the full n × n tropical matrix semigroup manifests itself in some beautiful geometric phenomenon involving polytopes. Various parts of the programme are joint work with people including Christopher Hollings, Zur Izhakian, and Marianne Johnson.
Mark Kambites
School of Mathematics
University of Manchester
[email protected]

Talk 4. Transience bounds for matrix powers in max algebra
In this talk we demonstrate how the concept of CSR expansions developed by Schneider and Sergeev helps to unify and compare the bounds on the periodicity transient existing in the literature. Unlike in the theory of graph exponents, these bounds are not strongly polynomial. To this end, we also present some max-algebraic extensions of polynomial bounds on graph exponents due to Wielandt and Schwartz.
Sergei Sergeev
School of Mathematics
University of Birmingham, UK
[email protected]
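The max-plus matrix powers in question are easy to experiment with. A small numpy sketch of ours, showing the eventual periodicity that the transience bounds quantify: after the transient, entries of successive powers grow by the maximum mean cycle weight λ of the weighted digraph of A (here λ = 1, period 1):

```python
import numpy as np

def maxplus_mul(A, B):
    """Tropical (max, +) matrix product: C[i, j] = max_k (A[i, k] + B[k, j])."""
    return (A[:, :, None] + B[None, :, :]).max(axis=1)

# Two nodes, self-loops of weight 1, arcs of weight 0 between them;
# a missing arc would be encoded as -np.inf.
A = np.array([[1.0, 0.0],
              [0.0, 1.0]])
P = A.copy()
powers = [A]                 # powers[t-1] holds the max-plus power A^t
for _ in range(5):
    P = maxplus_mul(P, A)
    powers.append(P)
# Periodicity after the transient: A^{t+1} = A^t + lambda entrywise.
assert np.allclose(powers[5], powers[4] + 1.0)
```

For general matrices the question the talk addresses is exactly how large t must be before this additive periodicity sets in, and how CSR expansions unify the known bounds on that transient.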

MS 56. Eigenvalue perturbations and pseudospectra - Part I of II
Talk 1. Inclusion theorems for pseudospectra of block triangular matrices
The ε-pseudospectrum of A ∈ C^{n×n}, denoted by σε(A), is the union of the spectra of the matrices A + E, where E ∈ C^{n×n} and ‖E‖2 ≤ ε. In this talk we consider inclusion relations of the form

σ_{f(ε)}(A11) ∪ σ_{f(ε)}(A22) ⊆ σε( [A11 A12; 0 A22] ) ⊆ σ_{g(ε)}(A11) ∪ σ_{g(ε)}(A22).

We derive formulae for f(ε) and g(ε) in terms of sep_λ(A11, A22) and ‖R‖2, where R is the solution of the Sylvester equation A11 R − R A22 = A12.
Michael Karow
Dept. of Mathematics
Technical University of Berlin
[email protected]

Talk 2. Conjectures on pseudospectra of matrices
We discuss some conjectures concerning coalescence points of connected components of pseudospectra of a square complex matrix A. We call the pseudospectrum of A of order j and level ε, denoted Λε,j(A), the set of eigenvalues of multiplicity ≥ j of matrices whose distance to A is ≤ ε.
• The coalescence points of the components of Λε,j(A) are the points where the pseudospectra Λε,j+1(A) arise.
• The coalescence points of the components of Λε,1(A) lie on line segments connecting eigenvalues of A, and analogously for Λε,j(A).
Juan-Miguel Gracia
Dept. of Applied Mathematics and Statistics
University of the Basque Country UPV/EHU, Spain
[email protected]

Talk 3. Sensitivity of eigenvalues of an unsymmetric tridiagonal matrix
The Wilkinson condition number ignores the tridiagonal form and so can be unduly pessimistic. We propose several relative condition numbers that exploit the tridiagonal form. Some of these numbers are derived from different factored forms (or representations) of the (possibly shifted) matrix, and so they shed light on which factored forms are best for computation. We show some interesting examples.
Carla Ferreira
Mathematics and Applications Department
University of Minho, Portugal
[email protected]

Beresford Parlett
University of California, Berkeley

Froilan Dopico
Universidad Carlos III de Madrid, Spain

Talk 4. First order structured perturbation theory for eigenvalues of skew-adjoint matrices
The main goal of structured matrix perturbation theory is to identify situations where eigenvalues are much less sensitive to structured perturbations (i.e., those belonging to the same class of matrices as the unperturbed one) than to unstructured ones. In this talk we analyze one such situation: the relevant structure is skew-adjointness with respect to an indefinite scalar product. Explicit formulas are obtained for both the leading exponent and the leading coefficient of the asymptotic perturbation expansions when the perturbations are taken to be also skew-adjoint. Using the Newton diagram as the main tool, it is shown that the leading coefficient depends on both the first (i.e., eigenvectors) and the second vectors in the longest Jordan chains associated with the eigenvalue under study.
Julio Moro
Departamento de Matematicas
Universidad Carlos III de Madrid, Madrid (Spain)
[email protected]

María Jose Pelaez
Departamento de Matematicas
Universidad Catolica del Norte
Antofagasta (Chile)
[email protected]

MS 57. Numerical linear algebra and optimization in imaging applications - Part I of II
Talk 1. Some numerical linear algebra and optimization problems in spectral imaging
In this talk we overview some of our recent work on numerical algorithms for spectral image analysis. Spectral imaging collects and processes image information from across the electromagnetic spectrum (often represented visually as a cube), and has a wide array of modern applications, for example in remote sensing for ecology and surveillance. Here we describe some of our work on the design and analysis of mathematical techniques for compressive sensing, processing, and analysis of spectral data. Topics considered include: (1) random SVD methods for dimensionality reduction, (2) joint reconstruction and classification of spectral images, and (3) applications of unmixing, clustering and classification methods to target identification. Tests on real data are described.
This is joint work with several people, including team members on projects funded by the U.S. Air Force Office of Scientific Research and the National Geospatial-Intelligence Agency.
Bob Plemmons
Depts. of Computer Science and Mathematics
Wake Forest University
[email protected]

Jennifer Erway
Dept. of Mathematics
Wake Forest University
[email protected]

Nicolas Gillis
Dept. of Combinatorics & Optimization
University of Waterloo
[email protected]

Xiaofei Hu
Dept. of Mathematics
Wake Forest University
[email protected]

Michael Ng
Dept. of Mathematics
Hong Kong Baptist University
[email protected]

Paul Pauca
Dept. of Computer Science
Wake Forest University
[email protected]

Sudhakar Prasad
Dept. of Astronomy and Physics
University of New Mexico
[email protected]

Jiani Zhang
Dept. of Mathematics
Wake Forest University
[email protected]

Qiang Zhang
Dept. of Health Sciences
Wake Forest University
[email protected]

Talk 2. Image restoration via constrained optimization: an approach using feasible direction methods
Image restoration is an ill-posed inverse problem which requires regularization. Regularization leads to reformulating the original restoration problem as a constrained optimization problem where the objective function is a regularization term and the constraint imposes fidelity to the data.
In this talk, we present a feasible direction method for obtaining restored images as solutions of the optimization problem. The presented method computes feasible search directions by inexactly solving trust region subproblems whose radius is adjusted in order to maintain feasibility of the iterates. Numerical results are presented to illustrate the effectiveness and efficiency of the proposed method.
Germana Landi
Dept. of Mathematics
University of Bologna
[email protected]

Talk 3. On the solution of linear systems in Newton-type methods for image reconstruction
Some imaging applications are usually modeled as a minimization problem with nonnegative constraints on the solution. For large-size problems, projected-Newton methods are very attractive because of their fast convergence. In order to make them computationally competitive, the inner linear system for the search direction computation should be solved efficiently. In this talk we focus on the solution of the linear system when different objective functions are considered. The objective function is related to the application, to the noise affecting the recorded image, and to the kind of regularization chosen.
Elena Piccolomini
Dept. of Mathematics
University of Bologna
[email protected]

Talk 4. Alternating direction optimization for convex inverse problems. Application to imaging and hyperspectral unmixing
In this talk I will address a new class of fast algorithms for solving convex inverse problems where the objective function is a sum of convex terms with possibly convex constraints. Usually, one of the terms in the objective function measures the data fidelity, while the others, jointly with the constraints, enforce some type of regularization on the solution. Several particular features of these problems (huge dimensionality, nonsmoothness) preclude the use of off-the-shelf optimization tools and have stimulated a considerable amount of research. In this talk, I will present a new class of algorithms to handle convex inverse problems tailored to image recovery applications and to hyperspectral unmixing. The proposed class of algorithms is an instance of the so-called alternating direction method of multipliers (ADMM), for which sufficient conditions for convergence are known. We show that these conditions are satisfied by the proposed algorithms. The effectiveness of the proposed approach is illustrated in a series of imaging inverse problems, including deconvolution and reconstruction from compressive observations, and hyperspectral unmixing problems, including sparse unmixing and positive matrix factorization.
Jose Bioucas-Dias
Instituto de Telecomunicações and Department of Electrical and Computer Engineering,
Instituto Superior Técnico, Portugal
[email protected]

Mario Figueiredo
Instituto de Telecomunicações and Department of Electrical and Computer Engineering,
Instituto Superior Técnico, Portugal
[email protected]

MS 58. Parametric eigenvalue problems - Part I of II
Talk 1. Computing double eigenvalues via the two-parameter eigenvalue problem
The task of computing all values of the parameter λ such that the matrix A + λB has a double eigenvalue can be interpreted as a singular (quadratic) two-parameter eigenvalue problem. Using a numerical method for the singular two-parameter eigenvalue problem it is possible to obtain all solutions as eigenvalues of a certain generalized eigenvalue problem.
Bor Plestenjak
IMFM & Department of Mathematics
University of Ljubljana
[email protected]

Andrej Muhic
IMFM & Department of Mathematics
University of Ljubljana
[email protected]
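For intuition, a toy illustration (not the talk's singular two-parameter method): for 2 × 2 matrices, A + λB has a double eigenvalue exactly when the discriminant tr(A + λB)² − 4 det(A + λB) vanishes, a polynomial of degree ≤ 2 in λ whose roots can be found directly. All matrices here are hypothetical examples:

```python
import numpy as np

# Find all lambda with A + lambda*B (2x2) having a double eigenvalue,
# via the discriminant tr^2 - 4*det = 0.
A = np.array([[1.0, 0.0], [0.0, -1.0]])
B = np.array([[0.0, 1.0], [1.0, 0.0]])

t0, t1 = np.trace(A), np.trace(B)              # tr(A + l*B) = t0 + t1*l
d0, d2 = np.linalg.det(A), np.linalg.det(B)    # det(A + l*B) is quadratic in l
d1 = np.linalg.det(A + B) - d0 - d2            # its middle coefficient
# discriminant p(l) = (t0 + t1*l)^2 - 4*(d0 + d1*l + d2*l^2)
coeffs = [t1**2 - 4 * d2, 2 * t0 * t1 - 4 * d1, t0**2 - 4 * d0]
for lam in np.roots(coeffs):
    w = np.linalg.eigvals(A + lam * B)
    assert abs(w[0] - w[1]) < 1e-6             # double eigenvalue at lam
```

For this example the two solutions are λ = ±i, each giving a defective double eigenvalue 0.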

Talk 2. Lyapunov inverse iteration for identifying Hopf bifurcations in models of incompressible flow
The identification of instability in large-scale dynamical systems caused by Hopf bifurcation is difficult because of the problem of computing the rightmost pair of complex eigenvalues of large sparse generalised eigenvalue problems. A method developed in [Meerbergen & Spence, SIMAX (2010), pp. 1982-1999] avoids this computation, instead performing an inverse iteration for a certain set of real eigenvalues that requires the solution of a large-scale Lyapunov equation at each iteration. This talk discusses a refinement of the method of Meerbergen & Spence and tests it on challenging problems arising from fluid dynamics.
Alastair Spence
Dept. of Mathematical Sciences
University of Bath, UK
[email protected]

Howard Elman
Dept. of Computer Science
University of Maryland, USA
[email protected]

Karl Meerbergen
Dept. of Computer Science
Katholieke Universiteit Leuven, Belgium
[email protected]

Minghao Wu
Dept. of Applied Mathematics & Statistics
University of Maryland, USA
[email protected]

Talk 3. A quadratically convergent algorithm for matrix distance problems
We discuss a method for the computation of the distance of a stable matrix to the unstable matrices with respect to the open left-half plane. First, we provide a fast algorithm to compute the complex unstructured stability radius. Second, based on a formula by Qiu et al. (Automatica, 31 (1995), pp. 879-890) we give a new fast method to compute the real structured stability radius. Both algorithms are based on finding Jordan blocks corresponding to a pure imaginary eigenvalue in a parameter-dependent Hamiltonian eigenvalue problem. Numerical results show the performance of both algorithms for several examples.
Melina A. Freitag
Dept. of Mathematical Sciences
University of Bath
[email protected]

Alastair Spence
Dept. of Mathematical Sciences
University of Bath
[email protected]
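For the complex unstructured radius, the quantity in question is min over real ω of σ_min(A − iωI). A naive grid sketch of that quantity (illustration only; the talk's algorithm is quadratically convergent, unlike this scan):

```python
import numpy as np

def distance_to_instability(A, omegas):
    """Grid estimate of the complex unstructured stability radius of a
    stable A: min over real omega of sigma_min(A - i*omega*I)."""
    n = A.shape[0]
    return min(np.linalg.svd(A - 1j * w * np.eye(n), compute_uv=False)[-1]
               for w in omegas)

A = np.diag([-1.0, -2.0])                       # stable and normal
r = distance_to_instability(A, np.linspace(-5.0, 5.0, 2001))
# for a normal matrix the distance equals |max Re(eigenvalue)| = 1
assert abs(r - 1.0) < 1e-6
```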

Talk 4. A real Jacobi-Davidson algorithm for the 2-real-parameter eigenvalue problem
We consider the nonlinear eigenvalue problem

(iωM + A + e^(−iωτ) B) u = 0

for real matrices M, A, B ∈ R^(n×n) with invertible M. Sought are triples (ω, τ, u) consisting of a complex eigenvector u ∈ C^n and two real eigenvalues ω and τ. Problems of this type appear, e.g., in the search for critical delays of linear time-invariant delay differential equations (LTI-DDEs)

M ẋ(t) = −A x(t) − B x(t − τ).

In [Meerbergen, Schroder, Voss, 2010, submitted] this problem is discussed for complex M, A, B. As there, we consider a Jacobi-Davidson-like projection method. The main differences in the real case are that (a) the search space is kept real and (b) a specialized method is used for the small projected problem. Numerical experiments show the effectiveness of the method.
Christian Schroder
Dept. of Mathematics
TU Berlin
[email protected]

MS 59. Structured matrix computations - Part I of II
Talk 1. Factorization of H²-matrices
Hierarchical matrices (H-matrices) have been shown to be very useful tools for a variety of applications, e.g., the construction of efficient and robust preconditioners for the solution of elliptic partial differential equations and integral equations. Most H-matrix algorithms are based on a recursive algorithm that approximates the product of two H-matrices by a new H-matrix, since this fundamental algorithm can be combined with simple recursive techniques to approximate the inverse or the LR factorization.
Combining H-matrices with multilevel techniques leads to H²-matrices that can reach significantly higher compression rates, particularly for very large matrices. Although most algorithms can take advantage of the rich multilevel structure to significantly improve efficiency, the construction of H²-matrices poses a challenge, since the connections between a large number of blocks have to be taken into account at each step.
The talk presents a new approach to the latter task: by adding a small amount of book-keeping information to the usual H²-matrix structure, it is possible to develop an algorithm that computes a low-rank update of a submatrix G|t×s in an H²-matrix in O(k²(#t + #s)) operations, where k is the local rank.
With this new algorithm at our disposal, several important higher-level operations become very simple: low-rank approximations of submatrices, e.g., computed by popular cross approximation schemes, can easily be merged into an H²-matrix, leading to an algorithm of complexity O(nk² log n). The multiplication of two H²-matrices can be implemented as a sequence of low-rank updates; the resulting algorithm also has a complexity of O(nk² log n). Using the matrix multiplication, we can also construct the inverse and the LR factorization in O(nk² log n) operations.
As with H-matrix techniques, the new algorithm is fully adaptive, i.e., it can reach any given accuracy, and it is able to easily handle matrices resulting from the discretization of two- or three-dimensional geometries. Its main advantages over H-matrix algorithms are the significantly reduced storage requirements and the higher efficiency for large matrices.
Steffen Borm
Institut fur Informatik
Christian-Albrechts-Universitat zu Kiel
24118 Kiel, Germany
[email protected]

Talk 2. The polynomial root finding problem and quasiseparable representations of unitary matrices
An effective tool for computing all the roots of a polynomial is to determine the eigenvalues of the corresponding companion matrix using the QR iteration method. The companion matrix belongs to the class of upper Hessenberg matrices which are rank-one perturbations of unitary matrices. This class is invariant under QR iterations. Moreover, it turns out that for every matrix in this class the corresponding unitary matrix has quasiseparable structure. This structure may be used to develop fast algorithms to compute eigenvalues of companion matrices. We discuss implicit fast QR eigenvalue algorithms solving this problem. The obtained algorithm is of complexity O(N), in contrast to O(N²) for non-structured methods.
Yuli Eidelman
School of Mathematical Sciences
Tel-Aviv University
Ramat-Aviv 69978, Israel
[email protected]
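The companion-matrix route that the fast algorithms above accelerate can be sketched directly; this is the dense O(N³) baseline using numpy, without the quasiseparable structure:

```python
import numpy as np

def roots_via_companion(coeffs):
    """Roots of the monic polynomial x^n + c[0] x^(n-1) + ... + c[n-1]
    as the eigenvalues of its companion matrix."""
    n = len(coeffs)
    C = np.zeros((n, n))
    C[0, :] = -np.asarray(coeffs)   # first row holds negated coefficients
    C[1:, :-1] = np.eye(n - 1)      # subdiagonal of ones (Hessenberg form)
    return np.linalg.eigvals(C)

r = np.sort_complex(roots_via_companion([0.0, -1.0]))  # x^2 - 1
assert np.allclose(r, [-1.0, 1.0])
```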

Talk 3. A fast direct solver for structured matrices arising from non-oscillatory integral equations
We present a fast direct solver for structured dense matrices arising from non-oscillatory integral equations. The solver is based on (1) multilevel matrix compression techniques that exploit a complex hierarchical low-rank block structure, and (2) a sparse matrix embedding that allows fast and robust matrix factorization and inverse application. For boundary integral equations in 2D, the solver has optimal O(N) complexity, where N is the system size; in 3D, it incurs an O(N^(3/2)) precomputation cost, followed by O(N log N) solves. Numerical experiments suggest the utility of our method as both a direct solver and a preconditioner for complex problems.
Kenneth L. Ho
Courant Institute of Mathematical Sciences
New York University
[email protected]

Leslie Greengard
Courant Institute of Mathematical Sciences
New York University
[email protected]

Talk 4. Multivariate orthogonal polynomials and inverse eigenvalue problems
It is well known that the computation of the recurrence coefficients of orthogonal polynomials with respect to a discrete inner product is related to an inverse eigenvalue problem. In this talk we present an algorithm to compute these recurrence coefficients for multivariate orthogonal polynomials. This algorithm generalizes previous results for the bivariate case and uses Givens transformations to solve the related inverse eigenvalue problem. We give a number of numerical examples using different configurations of points that define the discrete inner product.
Matthias Humet
Dept. of Computer Science
University of Leuven
[email protected]

Marc Van Barel
Dept. of Computer Science
University of Leuven
[email protected]

MS 60. Numerical linear algebra libraries for high end computing - Part II of II
Talk 1. Thick-restart Lanczos methods for symmetric-indefinite generalized eigenproblems in SLEPc
In this talk we present results on a Lanczos method for generalized eigenvalue problems Ax = λBx where both A and B are symmetric matrices but the pair (A, B) is not definite. In this case, eigenvalues are not guaranteed to be real and an eigenvalue can also be defective. The standard B-Lanczos process cannot be used, but the symmetry of the matrices can still be exploited to some extent by means of the pseudo-Lanczos process. We show results of a thick-restart variant implemented in the SLEPc library, and compare it with other solvers that ignore symmetry entirely.
Jose E. Roman
Dept. de Sistemes Informatics i Computacio
Universitat Politecnica de Valencia, Spain
[email protected]

Carmen Campos
Dept. de Sistemes Informatics i Computacio
Universitat Politecnica de Valencia, Spain
[email protected]

Talk 2. Parametric approach to smart-tuning and auto-tuning of the DOE ACTS collection
The Advanced CompuTational Software (ACTS) Collection is a set of computational tools and libraries developed primarily at DOE laboratories. Here we look at deriving parameters to automatically identify, at run-time, the most suitable auto-tuned kernels to load with a given application. Additionally, these parameters can be used in the context of "Smart-Tuning" to select the best algorithmic functionality and arguments to a library's API.
L. A. (Tony) Drummond
Computational Research Division, Lawrence Berkeley National Laboratory, USA
[email protected]

Osni Marques
Computational Research Division, Lawrence Berkeley National Laboratory, USA
[email protected]


2012 SIAM Conference on Applied Linear Algebra 58

Talk 3. Trilinos: foundational libraries that enable next-generation computing
With the availability and diversity of powerful computational resources, including multi-core CPU and GPU technology, there is significant interest in numerical software libraries that allow a developer to optimize the trade-off between effort and impact. In this talk we will discuss the current and ongoing efforts by which Trilinos is providing enabling technologies for the development of academic and industrial software targeted at next-generation architectures. We will cover a wide variety of Trilinos packages, from the foundational libraries for numerical linear algebra to the solver libraries that can leverage these fundamental capabilities to develop next-generation algorithms.
Heidi Thornquist
Electrical Systems Modeling Department
Sandia National Laboratories
[email protected]

Talk 4. Rethinking distributed dense linear algebra
It is a commonly held misconception that matrices must be distributed by blocks in order to translate the local computation of classical dense matrix operations into level 3 BLAS operations. In this talk, the performance and programmability implications of element-wise matrix distributions are discussed in the context of a recently introduced implementation, Elemental. In order to effectively convey code samples, both the FLAME methodology and its extension to distributed-memory computation will be briefly introduced.
Jack Poulson
Institute for Computational Engineering and Sciences
The University of Texas at Austin
[email protected]

MS 61. Efficient preconditioners for real world applications - Part II of II
Talk 1. Relaxed mixed constraint preconditioners for ill-conditioned symmetric saddle point linear systems
We develop efficient preconditioners for generalized saddle point linear systems Ax = b, where

    A = [ A   B^T ]
        [ B   -C  ],

A > 0, C ≥ 0 and B a full-rank rectangular matrix. Given two preconditioners P_A and P̄_A for A and a preconditioner P_S for S = B P̄_A^{-1} B^T + C, the Relaxed Mixed Constraint Preconditioner (RMCP) is M(ω)^{-1}, where

    M(ω) = [ I           0 ] [ P_A     0    ] [ I   P_A^{-1} B^T ]
           [ B P_A^{-1}  I ] [ 0    -ω P_S  ] [ 0        I       ].

Eigenanalysis of M(ω)^{-1} A shows that the optimal ω is related to the (cheaply estimated) spectral radii of P_A^{-1} A and P_S^{-1} S. Results on large linear systems arising from discretizations of geomechanical problems, as well as fluid flow in porous media, show that a proper choice of ω improves the MCP performance considerably.
Luca Bergamaschi
Dipartimento di Ingegneria Civile Edile Ambientale
University of Padova, Italy
[email protected]

Angeles Martínez
Dipartimento di Matematica
University of Padova, Italy
[email protected]
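A dense toy sketch of the block factorization above, on hypothetical random data; here P_A and P_S are taken to be the exact blocks and ω = 1, in which case M(ω) reproduces the saddle-point matrix exactly and all eigenvalues of M(ω)^{-1}A equal 1 (a real preconditioner would use cheap approximations instead):

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 6, 3
G = rng.standard_normal((n, n))
Ablk = G @ G.T + n * np.eye(n)                 # A > 0
B = rng.standard_normal((m, n))                # full-rank rectangular
C = np.eye(m)                                  # C >= 0
K = np.block([[Ablk, B.T], [B, -C]])           # saddle-point matrix

PA = Ablk                                      # exact-block "preconditioners"
PAinv = np.linalg.inv(PA)
PS = B @ PAinv @ B.T + C
omega = 1.0
L = np.block([[np.eye(n), np.zeros((n, m))], [B @ PAinv, np.eye(m)]])
D = np.block([[PA, np.zeros((n, m))], [np.zeros((m, n)), -omega * PS]])
U = np.block([[np.eye(n), PAinv @ B.T], [np.zeros((m, n)), np.eye(m)]])
M = L @ D @ U                                  # M(omega)

ev = np.linalg.eigvals(np.linalg.solve(M, K))  # eigenvalues of M^{-1} K
assert np.allclose(ev, 1.0)                    # perfectly clustered here
```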

Talk 2. Chebyshev acceleration of iterative refinement
Gaussian elimination with partial pivoting followed by iterative refinement can compute approximate solutions of linear systems of equations that are backward stable. In some situations, the number of refinement steps can be large and their cost prohibitive. Limiting the number of steps is particularly important on multicore architectures, where the solve phase of a sparse direct solver can represent a bottleneck.
We propose variants of the Chebyshev algorithm that can be used to accelerate the refinement procedure without loss of numerical stability. Numerical experiments on sparse problems from practical applications corroborate the theory and illustrate the potential savings offered by Chebyshev acceleration.
Jennifer Scott
Computational Science & Engineering Department
Science and Technology Facilities Council, UK
[email protected]

Mario Arioli
Computational Science & Engineering Department
Science and Technology Facilities Council, UK
[email protected]
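The basic refinement loop being accelerated can be sketched as follows (a generic textbook version, not the authors' Chebyshev variant; np.linalg.solve stands in for reusing the sparse LU factors):

```python
import numpy as np

def iterative_refinement(A, b, steps=3):
    """Classical iterative refinement: an initial solve followed by
    residual-correction steps, each reusing the factorization of A."""
    x = np.linalg.solve(A, b)            # stand-in for the LU solve
    for _ in range(steps):
        r = b - A @ x                    # residual
        x = x + np.linalg.solve(A, r)    # correction solve
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5)) + 5.0 * np.eye(5)   # well conditioned
b = rng.standard_normal(5)
x = iterative_refinement(A, b)
assert np.linalg.norm(b - A @ x) <= 1e-10 * np.linalg.norm(b)
```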

Talk 3. Parallel deflated GMRES with the Newton basis
The GMRES iterative method is widely used as a Krylov subspace accelerator for solving sparse linear systems when the coefficient matrix is nonsymmetric. The Newton basis implementation has been proposed for distributed memory computers as an alternative to the Arnoldi-based approach, in order to avoid fine-grained communications. The aim of our work here is to introduce a modification based on deflation techniques. This approach builds an augmented subspace in an adaptive way to accelerate the convergence of the restarted formulation. In our numerical experiments, we show the benefits of using this implementation in the PETSc package with Schwarz preconditioners for solving large CFD linear systems.
Desire Nuentsa Wakam
INRIA, Bordeaux, France
desire.nuentsa [email protected]

Jocelyne Erhel
INRIA, Rennes, France
[email protected]

Talk 4. Rank-k updates of incomplete Sherman-Morrison preconditioners
Let B = A + PQ^T be a large and sparse matrix, where A is a nonsingular matrix and PQ^T is a rank-k matrix. In this work we are interested in solving the updated linear system Bx = b by preconditioned iterations. We study the problem of updating an already existing preconditioner M for the matrix A. In particular, we consider how to update the incomplete LU factorization computed by the BIF algorithm (SIAM J. Sci. Comput., 30(5), pp. 2302-2318, 2008). The results of numerical experiments with different types of problems will be presented.
Jose Marín
Instituto de Matematica Multidisciplinar
Universidad Politecnica de Valencia
Camino de Vera s/n, 46022 Valencia, Spain
[email protected]

Juana Cerdan
Instituto de Matematica Multidisciplinar
Universidad Politecnica de Valencia
Camino de Vera s/n, 46022 Valencia, Spain
[email protected]

Jose Mas
Instituto de Matematica Multidisciplinar
Universidad Politecnica de Valencia
Camino de Vera s/n, 46022 Valencia, Spain
[email protected]
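For context, the exact analogue of such an update at the level of solves is the Sherman-Morrison-Woodbury identity: a solve with B = A + PQ^T reduces to solves with A plus one small k × k system. A generic dense sketch on hypothetical data (not the BIF-based incomplete update of the talk):

```python
import numpy as np

rng = np.random.default_rng(1)
n, k = 8, 2
A = 4.0 * np.eye(n) + 0.1 * rng.standard_normal((n, n))  # nonsingular
P = 0.3 * rng.standard_normal((n, k))
Q = 0.3 * rng.standard_normal((n, k))                    # rank-k update P Q^T
b = rng.standard_normal(n)

# Woodbury: (A + P Q^T)^{-1} b
#   = A^{-1} b - A^{-1} P (I + Q^T A^{-1} P)^{-1} Q^T A^{-1} b
Ainv_b = np.linalg.solve(A, b)
Ainv_P = np.linalg.solve(A, P)
S = np.eye(k) + Q.T @ Ainv_P              # small k-by-k capacitance system
x = Ainv_b - Ainv_P @ np.linalg.solve(S, Q.T @ Ainv_b)
assert np.allclose((A + P @ Q.T) @ x, b)  # solves the updated system
```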

MS 62. Solving ill-posed systems via signal-processing techniques - Part II of II
Talk 1. Effects of prox parameter selection strategies in exact and inexact first-order methods for compressed sensing and other composite optimization problems
We will discuss theoretical and practical implications of various strategies for choosing the prox parameter in prox gradient methods and related alternating direction methods. We will show extensions of existing convergence rates for both accelerated and classical first-order methods. A practical comparison based on a testing environment for L1 optimization will be presented.
Katya Scheinberg
Department of Industrial and Systems Engineering
Lehigh University
[email protected]

Donald Goldfarb
IEOR Department
Columbia University
[email protected]

Shiqian Ma
Institute for Mathematics and its Applications
University of Minnesota
[email protected]

Talk 2. An adaptive inverse scale space method for compressed sensing
In this talk a novel adaptive approach for solving ℓ1-minimization problems, as frequently arising in compressed sensing, is introduced; it is based on the recently introduced inverse scale space method. The scheme allows minimizers to be computed efficiently by solving a sequence of low-dimensional nonnegative least-squares problems. Moreover, extensive comparisons between the proposed method and the related orthogonal matching pursuit algorithm are presented.
Martin Benning
Institute for Computational and Applied Mathematics
University of Münster
[email protected]

Michael Moller
Institute for Computational and Applied Mathematics
University of Münster
[email protected]

Pia Heins
Institute for Computational and Applied Mathematics
University of Münster
[email protected]

Talk 3. CGSO for convex problems with polyhedral constraints
Conjugate Gradient with Subspace Optimization (CGSO) is a variant of the conjugate gradient algorithm that achieves the optimal complexity bound of Nemirovski-Yudin's algorithm for the class of strongly convex functions. In this talk we extend CGSO to constrained problems in which we minimize a strictly convex function over a convex polyhedron. We discuss the theoretical properties of CGSO for this class of problems as well as its practical performance.
Sahar Karimi
Department of Combinatorics and Optimization
University of Waterloo
[email protected]

Stephen Vavasis

Department of Combinatorics and Optimization
University of Waterloo
[email protected]

Talk 4. Phase-retrieval using explicit low-rank matrix factorization
Recently, Candes et al. proposed a novel methodology for phase retrieval from magnitude information by formulating it as a matrix-completion problem. In this work we develop an algorithm aimed at solving large-scale instances of this problem. We take advantage of the fact that the desired solution is of rank one and use low-rank matrix factorization techniques to attain considerable speed-up over existing approaches. We consider phase recovery in both the noisy and noiseless settings and study how various design choices affect the performance and reliability of the algorithm.
Ewout van den Berg
Department of Mathematics
Stanford University
[email protected]

Emmanuel Candes
Departments of Mathematics and Statistics
Stanford University
[email protected]

MS 63. Max-algebra - Part II of II
Talk 1. Three-dimensional convex polyhedra tropically spanned by four points
In this talk we show how a 3-dimensional convex polyhedron TA is obtained from a 4 × 4 normal tropically idempotent integer matrix A (i.e., A = (aij), aij ∈ Z≤0, aii = 0, A ⊗ A = A, with ⊕ = max, ⊗ = +).
Which polyhedra arise this way? TA has at most 20 vertices and 12 facets. By Euler's formula, the f-vector is at most (20, 30, 12). Facets have at most 6 vertices.
We show that a polyhedron TA combinatorially equivalent to the regular dodecahedron does not occur for any A, i.e., the polygon-vector of TA cannot be (0, 0, 12, 0). The polygon-vector of TA cannot be (0, 1, 10, 1) either, but we have examples where the polygon-vector of TA is (0, 2, 8, 2). We provide families of matrices with TA having polygon-vector (0, 3, 6, 3). Finally, certain families of circulant matrices yield TA with polygon-vector (0, 4, 4, 4), i.e., the facets of TA are 4 quadrilaterals, 4 pentagons and 4 hexagons. Previous work has been done by Joswig and Kulas, and by Develin and Sturmfels.
María Jesus de la Puente
Facultad de Matematicas
Universidad Complutense (UCM)
[email protected]

Adrian Jimenez
Facultad de Matematicas
Universidad Complutense (UCM)
[email protected]

Talk 2. Algorithmic problems in tropical convexity
We present recent advances in tropical computational geometry, including the fundamental problem of computing the vertices of a tropical polyhedron described as an intersection of half-spaces, or inversely. We also discuss the connection of these problems with hypergraph transversals, directed hypergraphs, and mean payoff games. We finally point out applications of tropical convexity to other fields in computer science, such as formal verification.
Xavier Allamigeon
INRIA Saclay – Ile-de-France and CMAP
Ecole Polytechnique, France
[email protected]

Stephane Gaubert
INRIA Saclay – Ile-de-France and CMAP
Ecole Polytechnique, France
[email protected]

Eric Goubault
CEA Saclay Nano-INNOV
Institut Carnot CEA LIST, DILS/MeASI, France
[email protected]

Ricardo D. Katz
CONICET. Instituto de Matematica “Beppo Levi”
Universidad Nacional de Rosario, Argentina
[email protected]

Talk 3. On the weak robustness of interval fuzzy matrices
A fuzzy matrix A (i.e., a matrix in a (max, min)-algebra) is called weakly robust if every orbit sequence of A with starting vector x contains no eigenvectors, unless x itself is an eigenvector of A. Weak robustness is extended to interval fuzzy matrices and their properties are studied. A characterization of weakly robust interval fuzzy matrices is described, and a quadratic algorithm for checking the weak robustness of a given interval fuzzy matrix is presented.
Jan Plavka
Department of Mathematics and Theoretical Informatics
Technical University of Košice
[email protected]

Martin Gavalec
Department of Information Technologies
University of Hradec Králové
[email protected]

Talk 4. Weakly stable matrices
Given a square matrix A and a vector x, the sequence {A^k ⊗ x, k = 0, 1, 2, ...} in the max-plus algebra is called the orbit of A with starting vector x. For some matrices the orbit never reaches an eigenspace unless it starts in one; such matrices are called weakly stable. We will characterise weakly stable matrices in both the reducible and irreducible cases. It turns out that irreducible weakly stable matrices are exactly the matrices whose critical graph is a Hamiltonian cycle in the associated graph. This talk is based on joint work with S. Sergeev and H. Schneider.
Peter Butkovic
School of Mathematics
University of Birmingham
[email protected]
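The orbit just defined is cheap to compute. A minimal sketch with a hypothetical 2 × 2 matrix; here the orbit reaches a max-plus eigenvector (eigenvalue 0) after one step, so this particular A is not weakly stable:

```python
import numpy as np

def maxplus_matvec(A, x):
    """Max-plus product: (A ⊗ x)_i = max_j (A_ij + x_j)."""
    return np.max(A + x[None, :], axis=1)

A = np.array([[0.0, -1.0], [-1.0, 0.0]])
x = np.array([3.0, 0.0])
orbit = [x]
for _ in range(3):
    orbit.append(maxplus_matvec(A, orbit[-1]))
# the orbit settles on an eigenvector: A ⊗ v = v with v = [3, 2]
assert np.array_equal(orbit[1], orbit[2])
```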

MS 64. Eigenvalue perturbations and pseudospectra - Part II of II
Talk 1. Ritz value localization for non-Hermitian matrices
The Ritz values of Hermitian matrices have long been well understood, thanks to the Cauchy interlacing theorem. Recent progress has begun to uncover similar properties for Ritz values of non-Hermitian matrices. For example, the "inverse field of values problem" asks whether a set of k points in the field of values can be the Ritz values from a k-dimensional subspace. We survey results on this problem, describe how majorization can lead to Ritz value containment regions, and provide a detailed analysis for a Jordan block.
Mark Embree
Department of Computational and Applied Mathematics
Rice University
[email protected]

Russell Carden

Department of Computational and Applied Mathematics
Rice University
[email protected]

Talk 2. Optimization of eigenvalues of Hermitian matrix functions
This work concerns a Hermitian matrix function depending on its parameters analytically. We introduce a numerical algorithm for the global optimization of a specified eigenvalue of such a Hermitian matrix function. The algorithm is based on constructing piecewise quadratic under-estimators for the eigenvalue function, and finding global minimizers of these quadratic models. In the multi-dimensional case, finding the global minimizers of the quadratic models is equivalent to solving quadratic programs. The algorithm generates sequences converging to global optimizers (linearly in practice). The applications include the H-infinity norm of a linear system, and the distance from a matrix to defectiveness.
Emre Mengi
Dept. of Mathematics
Koc University
[email protected]

Mustafa Kılıç
Dept. of Mathematics
Koc University
[email protected]

E. Alper Yıldırım
Department of Industrial Engineering
Koc University
[email protected]

Talk 3. Algorithms for approximating the H∞ norm
H∞ norm methods are used in control theory to design optimal controllers. The controllers are designed so as to minimize the H∞ norm of the n × n closed-loop transfer matrix, where n is the system order. This optimization procedure necessitates efficient methods for the computation of the H∞ norm itself. Existing methods compute the H∞ norm accurately, but the cost is multiple singular value decompositions and eigenvalue decompositions of size n, making them impractical when n is large. We present a novel method which provides a fast computation of the H∞ norm for large and sparse matrices, such as the ones arising in the control of PDEs. The method is a nested fixed point iteration, where the outer iteration is a Newton step and the inner iteration is associated with the problem of the computation of the ε-pseudospectral abscissa, i.e., the real part of a rightmost point of the ε-pseudospectrum of a certain linear operator. The fixed points of the iteration are characterized, local linear convergence of the algorithm for small enough ε is established, and some applications to the control of PDEs are discussed.

Michael L. Overton
Courant Institute, New York University
[email protected]

Mert Gurbuzbalaban
Courant Institute, New York University
[email protected]

Nicola Guglielmi
University of L'Aquila
[email protected]
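The underlying definition, for a state-space system (A, B, C, D), is ‖H‖∞ = sup over real ω of σ_max(C(iωI − A)^{-1}B + D). A naive grid sketch of this quantity on a hypothetical one-state system (this expensive scan is what the talk's method replaces):

```python
import numpy as np

def hinf_norm_grid(A, B, C, D, omegas):
    """Grid estimate of the H-infinity norm:
    sup over real omega of sigma_max(C (i*omega*I - A)^{-1} B + D)."""
    n = A.shape[0]
    best = 0.0
    for w in omegas:
        T = C @ np.linalg.solve(1j * w * np.eye(n) - A, B) + D
        best = max(best, np.linalg.svd(T, compute_uv=False)[0])
    return best

A = np.array([[-1.0]]); B = np.array([[1.0]])
C = np.array([[1.0]]);  D = np.array([[0.0]])
# transfer function 1/(i*w + 1): peak magnitude 1, attained at w = 0
val = hinf_norm_grid(A, B, C, D, np.linspace(-5.0, 5.0, 1001))
assert abs(val - 1.0) < 1e-6
```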

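For orientation, the quantity being computed is sup over ω of the largest singular value of the transfer matrix on the imaginary axis. The sketch below is only the naive dense frequency sweep that fast methods like the one in this talk are designed to avoid, on a hypothetical scalar transfer function G(s) = 1/(s² + 0.2s + 1):

```python
def G(s):
    # hypothetical SISO transfer function; for a matrix-valued G one would take
    # the largest singular value of C (sI - A)^{-1} B + D instead of abs()
    return 1.0 / (s * s + 0.2 * s + 1.0)

def hinf_norm(wmax=10.0, n=20000):
    # crude estimate of the H-infinity norm: sample |G(i w)| on a frequency grid
    return max(abs(G(complex(0.0, wmax * k / n))) for k in range(n + 1))
```

For this G the supremum is attained near ω² = 0.98 with value 1/√0.0396 ≈ 5.025; the grid estimate lands within about 1e-4 of it.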
Talk 4. Reduced basis methods for computing pseudospectral quantities
Reduced basis methods have been developed to efficiently solve parameter-dependent partial differential equations and, after a spatial discretization, the resulting linear systems. They typically consist of two phases. In an offline phase, the linear system is solved for several parameter samples and a low-dimensional subspace containing these solutions (approximately) is constructed. In an online phase, only the compression of the linear system with respect to this subspace needs to be solved, leading to greatly reduced execution times. The effectiveness of reduced basis methods crucially depends on the regularity of the parameter dependence and the chosen sampling strategy.
In this talk, we show how an approach inspired by reduced basis methods can be used to compute pseudospectra and associated quantities of a matrix A. Instead of solving a parameter-dependent linear system, one needs to consider the computation of the singular vector(s) belonging to the smallest singular value(s) of A − zI for all values of the complex parameter z of interest. While the computation of pseudospectra itself is still under development, we show how these techniques can already be used to speed up a recently proposed algorithm by Guglielmi and Overton for computing the pseudospectral abscissa.
Daniel Kressner
MATHICSE-ANCH
EPF Lausanne
[email protected]

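The parameter-dependent quantity in question is σ_min(A − zI) over a region of z. The sketch below is the full-accuracy evaluation that a reduced-basis approach would approximate, on a hypothetical 2x2 nonnormal matrix where the smallest singular value has a closed form via the Gram matrix:

```python
import math

A = [[1.0, 10.0], [0.0, 2.0]]   # hypothetical nonnormal test matrix

def smin(z):
    # smallest singular value of A - z I, via eigenvalues of the 2x2 Gram matrix
    m = [[A[0][0] - z, A[0][1]], [A[1][0], A[1][1] - z]]
    g00 = abs(m[0][0]) ** 2 + abs(m[1][0]) ** 2
    g11 = abs(m[0][1]) ** 2 + abs(m[1][1]) ** 2
    g01 = m[0][0].conjugate() * m[0][1] + m[1][0].conjugate() * m[1][1]
    mean = 0.5 * (g00 + g11)
    rad = math.sqrt((0.5 * (g00 - g11)) ** 2 + abs(g01) ** 2)
    return math.sqrt(max(mean - rad, 0.0))

def eps_pseudospectrum(eps, grid):
    # the eps-pseudospectrum is the set of z with smin(A - zI) <= eps
    return [z for z in grid if smin(z) <= eps]
```

A reduced-basis variant would replace the exact `smin` by the smallest singular value of a small projected matrix built from singular vectors at a few sample points z.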
MS 65. Numerical linear algebra and optimization in imaging applications - Part II of II
Talk 1. A recursion relation for solving L-BFGS systems with diagonal updates
We investigate a formula to solve limited-memory BFGS quasi-Newton Hessian systems with full-rank diagonal updates. Under some mild conditions, the system can be solved via a recursion that uses only vector inner products. This approach has broad applications in trust region and barrier methods in large-scale optimization.
Jennifer B. Erway
Dept. of Mathematics
Wake Forest University
[email protected]

Roummel F. Marcia
School of Natural Sciences
University of California, Merced
[email protected]

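For context, the standard inner-product-only recursion in the L-BFGS world is the two-loop recursion, which applies the *inverse* Hessian approximation built from curvature pairs (s_k, y_k) and a diagonal initial matrix. The talk concerns the companion problem of solving with the forward Hessian approximation; the sketch below shows only the familiar inverse recursion for orientation:

```python
def lbfgs_apply(grad, s_list, y_list, d0):
    # two-loop recursion: returns H * grad, where H approximates the inverse
    # Hessian from pairs (s_k, y_k) with initial matrix H0 = diag(d0);
    # only inner products and vector updates are used.
    q = list(grad)
    rhos = [1.0 / sum(si * yi for si, yi in zip(s, y))
            for s, y in zip(s_list, y_list)]
    alphas = []
    for s, y, rho in reversed(list(zip(s_list, y_list, rhos))):
        a = rho * sum(si * qi for si, qi in zip(s, q))
        alphas.append(a)
        q = [qi - a * yi for qi, yi in zip(q, y)]
    r = [d * qi for d, qi in zip(d0, q)]
    for (s, y, rho), a in zip(zip(s_list, y_list, rhos), reversed(alphas)):
        b = rho * sum(yi * ri for yi, ri in zip(y, r))
        r = [ri + (a - b) * si for ri, si in zip(r, s)]
    return r
```

A quick sanity property: with a single pair, the result satisfies the secant condition H y = s regardless of the diagonal d0.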
Talk 2. Wavefront gradients reconstruction using l1 − lp models
Images of objects in outer space acquired by ground-based telescopes are usually blurred by atmospheric turbulence. To improve the quality of these images, the wavefront of the light is utilized to derive the point spread function (PSF). We propose the l1 − lp (p = 1, 2) model for reconstructing the wavefront gradients and hence the wavefront itself. The model can give a more accurate PSF and therefore better restored images. Numerical results are given to illustrate the performance of the proposed models.
Raymond H. Chan
Dept. of Mathematics
The Chinese University of Hong Kong
[email protected]

Xiaoming Yuan
Dept. of Mathematics
Hong Kong Baptist University
[email protected]

Wenxing Zhang

Dept. of Mathematics
Nanjing University
[email protected]

Talk 3. Edge-preserving image enhancement via blind deconvolution and upsampling operators
We consider the super-resolution model introduced by S.J. Osher and A. Marquina in J. Sci. Comput., 37 (2008), pp. 367-382, consisting of the solution of a total-variation based variational problem that solves a linear degradation model involving a convolution operator and a down-sampling operator. In this research work we explore different edge-preserving up/down-sampling operators with different orders of spatial accuracy and estimated convolution operators to remove unknown blur from degraded low-resolution images to solve this genuine inverse problem. Some numerical examples are provided to show the features of the proposed algorithm.
Antonio Marquina
Department of Applied Mathematics
University of Valencia, Spain
[email protected]

Stanley J. Osher
Department of Mathematics
UCLA, USA
[email protected]

Shantanu H. Joshi
LONI, UCLA, USA
[email protected]

Talk 4. A new hybrid-optimization method for large-scale, non-negative, full regularization
We present a new method for solving the full regularization problem of computing both the regularization parameter and the corresponding solution of a linear or nonlinear ill-posed problem. The method is based on stochastic and fast, gradient-based optimization techniques, and computes non-negative solutions to l1 or l2 regularization problems. We describe the method and present numerical results for large-scale image restoration problems.
Marielba Rojas
Delft Institute of Applied Mathematics
Delft University of Technology, The Netherlands
[email protected]

Johana Guerrero
Department of Computer Science
University of Carabobo, Venezuela
[email protected]

Marcos Raydan
Department of Scientific Computing and Statistics
Simón Bolívar University, Venezuela
[email protected]

MS 66. Parametric eigenvalue problems - Part II of II
Talk 1. A subspace optimization technique for the generalized minimal maximal eigenvalue problem
Consider the generalized minimal maximal eigenvalue problem:

    λ*_max := min_x max_{λ,v} λ
    subject to K(x)v = λM(x)v,  x ∈ H ⊂ R^p,  v ∈ R^n,

where the matrices K(x) and M(x) are affine matrix functions, creating a symmetric positive definite pencil (K(x), M(x)) on the hypercube H. It is a quasi-convex, non-smooth optimization problem.
We present a subspace iteration method for computing the minimal maximal eigenvalue of a large scale eigenvalue problem. The idea is based on Kelley's classical cutting plane method for convex problems. In each iteration, λ1 and a corresponding eigenvector v are computed for a specific x. The eigenvector v is then used to expand the subspace, and the next iterate x is determined from the minimal maximal eigenvalue of the small scale projected problem. Convergence theory of the method relies on the fact that the maximal Ritz value is a support to the maximal eigenvalue.
Jeroen De Vlieger
Dept. of Computer Science
KU Leuven
[email protected]

Karl Meerbergen (promotor)
Dept. of Computer Science
KU Leuven
[email protected]

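In one parameter with M(x) = I, the inner maximization reduces to λ_max(K(x)), which is a convex function of x for an affine family, so the outer minimization is a one-dimensional convex problem. The sketch below uses hypothetical 2x2 matrices and plain ternary search, a toy stand-in for the cutting-plane/subspace machinery of the talk:

```python
import math

def lam_max(t, A, B):
    # largest eigenvalue of the 2x2 symmetric matrix K(t) = A + t*B (closed form)
    a = A[0][0] + t * B[0][0]
    b = A[0][1] + t * B[0][1]
    d = A[1][1] + t * B[1][1]
    return 0.5 * (a + d) + math.sqrt((0.5 * (a - d)) ** 2 + b ** 2)

def min_max_eig(A, B, lo, hi, tol=1e-10):
    # t -> lam_max(A + t*B) is convex, so ternary search finds its minimum
    while hi - lo > tol:
        m1 = lo + (hi - lo) / 3.0
        m2 = hi - (hi - lo) / 3.0
        if lam_max(m1, A, B) < lam_max(m2, A, B):
            hi = m2
        else:
            lo = m1
    t = 0.5 * (lo + hi)
    return t, lam_max(t, A, B)
```

For A = diag(3, 1) and B = [[-1, 1], [1, 1]], λ_max(A + tB) = 2 + √((1−t)² + t²), minimized at t = 1/2 with value 2 + √(1/2).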
Talk 2. An iterative method for computing pseudospectral abscissa and stability radii for nonlinear eigenvalue problems
We consider the following class of nonlinear eigenvalue problems,

    ( ∑_{i=1}^m A_i p_i(λ) ) v = 0,

where A_1, ..., A_m are given n × n matrices and the functions p_1, ..., p_m are assumed to be entire. This does not only include polynomial eigenvalue problems but also eigenvalue problems arising from systems of delay differential equations. Our aim is to compute the ε-pseudospectral abscissa, i.e. the real part of the rightmost point in the ε-pseudospectrum, which is the complex set obtained by joining all solutions of the eigenvalue problem under perturbations {δA_i}_{i=1}^m, of norm at most ε, of the matrices {A_i}_{i=1}^m.
In analogy to the linear eigenvalue problem, we prove that it is sufficient to restrict the analysis to rank-1 perturbations of the form δA_i = β_i u v*, where u ∈ C^n and v ∈ C^n with β_i ∈ C for all i. Using this main - and unexpected - result, we present new iterative algorithms which only require the computation of the spectral abscissa of a sequence of problems obtained by adding rank one updates to the matrices A_i. These provide lower bounds to the pseudospectral abscissa and in most cases converge to it. A detailed analysis of the convergence of the algorithms is made.
The methods available for the standard eigenvalue problem in the literature provide a robust and reliable computation, but at the cost of full eigenvalue decompositions of order 2n and singular value decompositions, making them unfeasible for large systems. Moreover, these methods cannot be generalized to nonlinear eigenvalue problems, as we shall explain. Therefore, the presented method is the first generally applicable method for nonlinear problems. In order to be applied, it simply requires a procedure to compute the rightmost eigenvalue and the corresponding left and right eigenvectors. In addition, if the matrices A_i are large and sparse, then the computation of the rightmost eigenvalue can for many classes of nonlinear eigenvalue problems be performed in an efficient way by iterative algorithms which only rely on matrix-vector multiplication and on solving systems of linear equations, where the structure of the matrices (original sparse matrices plus rank one updates) can be exploited. This feature, as well as other properties of the presented numerical methods, are illustrated by means of the delay and polynomial eigenvalue problem.
Wim Michiels
Department of Computer Science
KU Leuven
[email protected]

Nicola Guglielmi
Dipartimento di Matematica Pura e Applicata
Università dell'Aquila
[email protected]

Talk 3. Statistical pseudospectrum and eigenvalue robustness to rank-one disturbance
In this talk, we present a new statistical measure of the robustness (or sensitivity) of linear dynamic systems to rank-one random disturbances. The sensitivity assessment is a statistical pseudospectrum: given the probability distribution of the random disturbance magnitude, it measures the expected frequency region where the system eigenvalues are located. We discuss the properties of the robustness and sensitivity measure. We notably stress the existence of an invariant of the measure that consequently shows that, under certain conditions, the rate of increase of a rank-one pseudospectrum area is constant as a function of the disturbance magnitude.
Christophe Lecomte
Institute of Sound and Vibration Research (ISVR) and Southampton Statistical Sciences Research Institute (S3RI)
University of Southampton
[email protected]

Maryam Ghandchi Tehrani
Institute of Sound and Vibration Research (ISVR)
University of Southampton
University Road, Southampton
[email protected]

MS 67. Structured matrix computations - Part II of II
Talk 1. Randomized numerical matrix computations with applications
It has long been known that random matrices tend to be well conditioned, but exploitation of this phenomenon in numerical matrix computations is more recent, and there are still various new directions to investigate. Some of them will be covered in this talk. This includes pivoting-free but safe Gaussian elimination, randomized preconditioning of linear systems of equations, computation of a basis for the null space of a singular matrix, approximation by low-rank matrices, and applications to polynomial root-finding.
Victor Pan
Department of Mathematics and Computer Science
Lehman College - City University of New York
Bronx, New York
[email protected]

Talk 2. Massively parallel structured direct solver for the equations describing time-harmonic seismic waves
We consider the discretization and approximate solutions of equations describing time-harmonic seismic waves in 3D. We discuss scalar qP polarized waves, and multi-component elastic waves in inhomogeneous anisotropic media. The anisotropy comprises general (tilted) TI and orthorhombic symmetries. We are concerned with solving these equations for a large number of different sources on a large domain. We consider variable order finite difference schemes, to accommodate anisotropy on the one hand and allow higher order accuracy - to control sampling rates for relatively high frequencies - on the other hand.
We make use of a nested dissection based domain decomposition, with separators of variable thickness, and introduce an approximate direct (multifrontal) solver by developing a parallel Hierarchically SemiSeparable (HSS) matrix compression, factorization, and solution approach. In particular, we present elements of the following new parallel algorithms and their scalability: the parallel construction of an HSS representation or approximation for a general dense matrix, the parallel ULV factorization of such a matrix, and the parallel solution with multiple right-hand sides. The parallel HSS construction consists of three phases: parallel rank revealing QR (RRQR) factorization based on a Modified Gram-Schmidt (MGS) method with column pivoting, parallel row compression, and parallel column compression. The parallel HSS factorization involves the use of two children's contexts for a given parent context. The communication patterns are composed of intra-context and inter-context ones. Similar strategies are applied to the HSS solution.
We present various examples illustrating the performance of our algorithm as well as applications in so-called full waveform inversion.
Maarten V. de Hoop
Department of Mathematics and Department of Earth and Atmospheric Sciences
Purdue University
West Lafayette, Indiana 47907
[email protected]

Shen Wang
Department of Mathematics and Department of Earth and Atmospheric Sciences
Purdue University
West Lafayette, Indiana 47907
[email protected]

Jianlin Xia
Department of Mathematics
Purdue University
West Lafayette, Indiana 47907
[email protected]

Xiaoye S. Li
Computational Research Division
Lawrence Berkeley National Laboratory
Berkeley, CA
[email protected]

Talk 3. On the conditioning of incomplete Cholesky factorizations with orthogonal dropping
We consider incomplete Cholesky factorizations based on orthogonal dropping for the iterative solution of symmetric positive definite linear systems. These methods are becoming increasingly popular tools for computing an approximate factorization of large dense matrices, including update matrices and Schur complements that arise in sparse solvers. For the system preconditioned with these incomplete factorizations, we present an upper bound on the condition number which only depends on the accuracy of the individual approximation (dropping) steps. The analysis is illustrated with some existing factorization algorithms in the context of discretized elliptic partial differential equations.

Artem Napov
Computational Research Division
Lawrence Berkeley National Laboratory
Berkeley, CA 94720
[email protected]

Talk 4. Randomized direct solvers
We propose some new structured direct solvers for large linear systems, using randomization and other techniques. Our work involves new flexible methods to exploit structures in large matrix computations. Our randomized structured techniques provide both higher efficiency and better applicability than some existing structured methods. New efficient ways are proposed to conveniently perform various complex operations which are difficult in standard rank-structured solvers. Extension of the techniques to least squares problems and eigenvalue problems will also be shown.
We also study the following issues:
1. Develop matrix-free structured solvers.
2. Update a structured factorization when few matrix entries change.
3. Relaxed rank requirements in structured solvers. We show the feasibility of our methods for solving various difficult problems, especially high dimensional ones.
4. Develop effective preconditioners for problems without significant rank structures. We analyze the criterion for compressing off-diagonal blocks so as to achieve nearly optimal effectiveness and efficiency in our preconditioner.

Jianlin Xia
Department of Mathematics
Purdue University
West Lafayette, Indiana 47907
[email protected]

Maarten V. de Hoop
Department of Mathematics and Department of Earth and Atmospheric Sciences
Purdue University
West Lafayette, Indiana 47907
[email protected]

Xiaoye S. Li
Computational Research Division
Lawrence Berkeley National Laboratory
Berkeley, CA
[email protected]

Shen Wang
Department of Mathematics and Department of Earth and Atmospheric Sciences
Purdue University
West Lafayette, Indiana 47907
[email protected]

MS 68. Linear algebra for structured eigenvalue computations arising from (matrix) polynomials
Talk 1. A QR algorithm with generator compression for structured eigenvalue computation
In this talk we present a new structured implicit QR algorithm for fast computation of matrix eigenvalues. The algorithm, which relies on the properties of quasiseparable structure, applies to unitary-plus-rank-one Hessenberg matrices and, in particular, to Frobenius companion matrices. It computes the eigenvalues of an n × n matrix in O(n^2) operations with O(n) memory. The introduction of a generator compression step makes it possible to reduce complexity - without spoiling accuracy - with respect to a previous fast eigensolver for companion matrices [Bini, Boito, Eidelman, Gemignani, Gohberg, LAA 2010]. Numerical results will be presented for the single- and double-shift strategies.
Paola Boito
XLIM-DMI
University of Limoges
[email protected]

Yuli Eidelman
School of Mathematical Sciences
Raymond and Beverly Sackler Faculty of Exact Sciences
Tel-Aviv University
[email protected]

Luca Gemignani
Dept. of Mathematics
University of Pisa
[email protected]

Israel Gohberg
School of Mathematical Sciences
Raymond and Beverly Sackler Faculty of Exact Sciences
Tel-Aviv University

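The Frobenius companion matrix these fast solvers exploit is easy to build explicitly. The sketch below (coefficients in the last row, one common convention) constructs it for a monic polynomial and checks its defining eigenvector structure; the structured O(n²) QR machinery of the talk is not attempted here:

```python
def companion(coeffs):
    # companion matrix (coefficients in the last row) of the monic polynomial
    # p(x) = x^n + coeffs[0] x^(n-1) + ... + coeffs[n-1]
    n = len(coeffs)
    C = [[0.0] * n for _ in range(n)]
    for i in range(n - 1):
        C[i][i + 1] = 1.0                   # superdiagonal of ones
    for j in range(n):
        C[n - 1][j] = -coeffs[n - 1 - j]    # last row carries the coefficients
    return C

def apply(C, v):
    # plain matrix-vector product
    return [sum(C[i][j] * v[j] for j in range(len(v))) for i in range(len(v))]
```

For each root λ of p, the vector (1, λ, ..., λ^{n-1}) is an eigenvector of C with eigenvalue λ; this is exactly the unitary-plus-rank-one structure (shift matrix plus a rank-one last row) mentioned in the abstract.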
Talk 2. Quadratic realizability for structured matrix polynomials
Which lists of elementary divisors L can be realized by some quadratic matrix polynomial Q(λ)? For regular Q(λ) over the field C, this problem has recently been solved by several researchers. Indeed, if L can be realized at all by some regular Q over C, then it can always be realized by some upper triangular Q; several independent proofs are now known. This talk focuses on the analogous question for structured matrix polynomials. If S is a class of structured polynomials, which elementary divisor lists L can be realized by some quadratic Q(λ) in S? We survey current progress on this problem for various structure classes S, including palindromic, alternating, and Hermitian matrix polynomials. As time permits, we will also consider the quadratic realizability of additional features, such as sign characteristics and minimal indices, for these same structure classes.
D. Steven Mackey
Dept. of Mathematics
Western Michigan University
[email protected]

Fernando De Teran
Depto. de Matematicas
Universidad Carlos III de Madrid
[email protected]

Froilan M. Dopico
ICMAT/Depto. de Matematicas
Universidad Carlos III de Madrid
[email protected]

Francoise Tisseur
School of Mathematics
The University of Manchester
[email protected]

Talk 3. Fast computation of zeros of a polynomial
The usual method for computing the zeros of a polynomial is to form the companion matrix and compute its eigenvalues. In recent years, several methods that do this computation in O(n^2) time with O(n) memory by exploiting the structure of the companion matrix have been proposed. We propose a new method of this type that makes use of Fiedler's factorization of a companion matrix into a product of n − 1 essentially 2 × 2 matrices. Our method is a non-unitary variant of Francis's implicitly-shifted QR algorithm that preserves this structure. As a consequence, the memory requirement is a very modest 4n, and the flop count is O(n) per iteration (and O(n^2) overall). We will present numerical results and compare our method with other methods.
David S. Watkins
Department of Mathematics
Washington State University
[email protected]

Jared L. Aurentz
Department of Mathematics
Washington State University
[email protected]

Raf Vandebril
Department of Computer Science
K. U. Leuven
[email protected]

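Fiedler's factorization itself is short to write down. The sketch below uses one common indexing: a diagonal factor M_0 plus factors M_1, ..., M_{n-1} that are identity except for a single 2x2 block; multiplying them in descending order recovers the Frobenius companion matrix (with coefficients in the first row), while other orderings give the pentadiagonal Fiedler companion forms with the same characteristic polynomial. This is only the factorization, not the structured QR iteration of the talk:

```python
def identity(n):
    return [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def fiedler_factors(coeffs):
    # coeffs = [a_{n-1}, ..., a_0] for monic p(x) = x^n + a_{n-1}x^(n-1) + ... + a_0.
    # Returns [M_0, M_1, ..., M_{n-1}]: M_0 is diagonal, the rest are
    # identity matrices except for one "essentially 2x2" block.
    n = len(coeffs)
    a = coeffs[::-1]              # a[k] = coefficient of x^k
    M0 = identity(n)
    M0[n - 1][n - 1] = -a[0]
    factors = [M0]
    for k in range(1, n):
        M = identity(n)
        i = n - 1 - k             # the 2x2 block sits in rows/columns i, i+1
        M[i][i] = -a[k]; M[i][i + 1] = 1.0
        M[i + 1][i] = 1.0; M[i + 1][i + 1] = 0.0
        factors.append(M)
    return factors
```

For p(x) = x³ − 6x² + 11x − 6 the product M_2 M_1 M_0 is [[6, −11, 6], [1, 0, 0], [0, 1, 0]], whose eigenvalues are the roots 1, 2, 3.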
Talk 4. Eigenvector recovery of linearizations and the condition number of eigenvalues of matrix polynomials
The standard formula for the condition number of a simple eigenvalue of a matrix polynomial involves the left and right eigenvectors associated with the eigenvalue [F. Tisseur, Backward error and condition of polynomial eigenvalue problems, Linear Algebra Appl., 309 (2000) 339-361]. The usual way to solve polynomial eigenvalue problems is by using linearizations. In the past few years, different families of linearizations have been introduced. In order to compare the condition number for the eigenvalues of these linearizations, we need formulas for the associated eigenvectors. We present in this talk formulas for the eigenvectors of several families of Fiedler-like linearizations. These formulas are introduced using the notion of eigencolumn, which allows us to relate the eigenvectors of the linearizations with the eigenvectors of the polynomial. This fact may also allow us to compare the condition number of the eigenvalue in the polynomial with the condition number of the eigenvalue in the linearizations. Moreover, the use of eigencolumns allows us to express similar formulas for minimal bases of singular polynomials.
Fernando De Teran
Depto. de Matematicas
Universidad Carlos III de Madrid
[email protected]

Maria I. Bueno
Dept. of Mathematics
University of California at Santa Barbara
[email protected]

Froilan M. Dopico
ICMAT/Depto. de Matematicas
Universidad Carlos III de Madrid
[email protected]

D. Steven Mackey
Dept. of Mathematics
Western Michigan University
[email protected]

MS 69. Advances in sparse matrix factorization
Talk 1. A sparse inertia-revealing factorization
We show how to apply Wilkinson's inertia-revealing factorization to sparse matrices in a way that preserves sparsity. No other inertia-revealing factorization is guaranteed to preserve sparsity. The input matrix A is factored row by row, thereby producing the triangular factors of all its leading blocks. The inertia is derived from the number of sign changes in the sequence of determinants (revealed by the factors) of these blocks. We show that the fill in the triangular factors is bounded by the fill in the QR factorization of A. Therefore, symmetrically pre-permuting A to a doubly-bordered form guarantees sparsity in our algorithm.
Alex Druinsky
School of Computer Science
Tel-Aviv University
[email protected]

Sivan Toledo
School of Computer Science
Tel-Aviv University
[email protected]

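The sign-change principle behind inertia revealing is easy to see on a dense toy: when all leading principal minors of a symmetric matrix are nonzero, the signs of the pivots of an LDLᵀ factorization (equivalently, the sign changes in the minor sequence) give the inertia. This dense sketch ignores all the sparsity machinery of the talk and assumes no pivot vanishes:

```python
def inertia(A):
    # inertia (n_plus, n_minus) of a symmetric matrix from the pivots of an
    # LDL^T factorization without pivoting; assumes every leading principal
    # minor is nonzero, the setting where the sign-change count applies.
    n = len(A)
    M = [row[:] for row in A]
    pivots = []
    for k in range(n):
        p = M[k][k]
        pivots.append(p)
        for i in range(k + 1, n):
            f = M[i][k] / p
            for j in range(k, n):
                M[i][j] -= f * M[k][j]   # symmetric Gaussian elimination step
    return sum(p > 0 for p in pivots), sum(p < 0 for p in pivots)
```

By Sylvester's law of inertia, the counts of positive and negative pivots equal the counts of positive and negative eigenvalues.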
Talk 2. Multifrontal factorization on heterogeneous multicore systems
When solving the sparse linear systems that arise in MCAE applications, the multifrontal method is particularly attractive as it transforms the sparse matrix factorization into an elimination tree of dense matrix factorizations. The vast majority of the floating point operations can be performed with calls to highly tuned BLAS3 routines, and near peak throughput is expected. Such computations are performed today on clusters of multicore microprocessors, often accelerated by graphics processing units (GPUs). This talk discusses how concurrency in the multifrontal computation is processed with message passing (MPI), shared memory (OpenMP), and GPU accelerators (CUDA), exploiting the unique strengths of each.
Bob Lucas
Information Sciences Institute
University of Southern California
[email protected]

Roger Grimes
Livermore Software Technology Corporation
[email protected]

John Tran
Information Sciences Institute
University of Southern California
[email protected]

Gene Wagenbreth
Information Sciences Institute
University of Southern California
[email protected]

Talk 3. Towards an optimal parallel approximate sparse factorization algorithm using hierarchically semi-separable structures
Hierarchically semiseparable (HSS) matrix algorithms are emerging techniques in constructing superfast direct solvers for both dense and sparse linear systems. We present a set of novel parallel algorithms for the key HSS operations that are needed for solving large linear systems, including the parallel rank-revealing QR factorization, the HSS constructions with hierarchical compression, the ULV HSS factorization, and the HSS solutions. We have applied our new parallel HSS-embedded multifrontal solver to the anisotropic Helmholtz equations for seismic imaging, and were able to solve a linear system with 6.4 billion unknowns using 4096 processors, in about 20 minutes.
Xiaoye S. Li
Lawrence Berkeley National Laboratory
[email protected]

Shen Wang
Purdue University
[email protected]

Jianlin Xia

Purdue University
[email protected]

Maarten V. de Hoop
Purdue University
[email protected]

Talk 4. Improving multifrontal methods by means of low-rank approximation techniques
By a careful exploitation of the low-rank property of discretized elliptic PDEs, a substantial reduction of the flops and memory consumption can be achieved for many linear algebra operations. In this talk, we present how low-rank approximations can be used to significantly improve a sparse multifrontal solver. We introduce a blocked, low-rank storage format for compressing frontal matrices and compare it to the HSS storage format. Finally, we present experimental results showing the reduction of flops and memory footprint achieved on the solution of large scale matrices from applications such as the acoustic wave equation and thermo-mechanics.
Clement Weisbecker
INPT(ENSEEIHT)-IRIT
University of Toulouse
[email protected]

Patrick Amestoy
INPT(ENSEEIHT)-IRIT
University of Toulouse
[email protected]

Cleve Ashcraft
LSTC
[email protected]

Olivier Boiteau
EDF R&D
[email protected]

Alfredo Buttari
[email protected]

Jean-Yves L'Excellent
INRIA-LIP (ENS Lyon)
[email protected]

MS 70. Accurate algorithms and applications
Talk 1. High-precision and accurate algorithms in Physics and Mathematics
In this talk we present a survey of recent applications in Physics and Mathematics where a high level of numeric precision is required. Such calculations are facilitated, on one hand, by high-precision software packages that include high-level language translation modules to minimize the conversion effort, and on the other hand, by the use of theoretical error bounds and accurate algorithms when available. These applications include supernova simulations, planetary orbit calculations, Coulomb n-body atomic systems, scattering amplitudes of quarks, gluons and bosons, nonlinear oscillator theory, experimental mathematics, evaluation of recurrence relations, numerical integration of ODEs, computation of periodic orbits, studies of the splitting of separatrices, detection of strange non-chaotic attractors, Ising theory, quantum field theory, and discrete dynamical systems. We conclude that high-precision arithmetic facilities are now an indispensable component of a modern large-scale scientific computing environment.
Roberto Barrio
Dept. Matematica Aplicada and IUMA
Universidad de Zaragoza, Spain
[email protected]


David H. Bailey
Lawrence Berkeley National Laboratory
Berkeley, CA
[email protected]

Jonathan M. Borwein
CARMA, University of Newcastle
Callaghan, Australia
[email protected]

Sergio Serrano
Dept. Matematica Aplicada and IUMA
Universidad de Zaragoza, Spain
[email protected]

Talk 2. Accurate evaluation of 1D and 2D polynomials in Bernstein form
In this talk we present some fast and accurate algorithms to evaluate 1D and 2D polynomials expressed in the Bernstein form in CAGD. Although it is a well-known and stable algorithm, the De Casteljau algorithm may still be less accurate than expected owing to cancellation in some circumstances. Our compensated algorithms, applying error-free transformations, can yield a result as accurate as if computed by the De Casteljau algorithm in twice the working precision. Numerical tests illustrate that our algorithms can run significantly faster than the De Casteljau algorithm using a double-double library.
Hao Jiang
PEQUAN team, LIP6
Université Pierre et Marie Curie
[email protected]

Roberto Barrio
Dept. Matematica Aplicada and IUMA
Universidad de Zaragoza, Spain
[email protected]

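The baseline algorithm being compensated is short. The sketch below is the plain De Casteljau evaluation of a polynomial in Bernstein form; the compensated variants of the talk additionally track the rounding error of each convex combination with error-free transformations, which is not attempted here:

```python
def de_casteljau(b, t):
    # evaluate the Bernstein-form polynomial with control coefficients b at
    # t in [0, 1] by repeated convex combinations
    c = list(b)
    for r in range(1, len(c)):
        for i in range(len(c) - r):
            c[i] = (1.0 - t) * c[i] + t * c[i + 1]
    return c[0]
```

For example, the coefficients [0, 0, 1] represent the Bernstein basis polynomial B_2^2(t) = t², so evaluation at t = 0.5 gives 0.25.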
Talk 3. Some issues related to double roundings
Double rounding is a phenomenon that may occur when different floating-point precisions are available on the same system, or when performing scaled operations whose final result is subnormal. Although double rounding is, in general, innocuous, it may change the behavior of some useful small floating-point algorithms. We analyze the potential influence of double roundings on the Fast2Sum and 2Sum algorithms, on some summation algorithms, and on Veltkamp's splitting. We also show how to handle possible double roundings when performing scaled Newton-Raphson division iterations (to avoid possible underflow problems).
Jean-Michel Muller
CNRS, Laboratoire LIP
Université de Lyon, France
[email protected]

Erik Martin-Dorel
Ecole Normale Superieure de Lyon, Laboratoire LIP
Université de Lyon, France
[email protected]

Guillaume Melquiond
INRIA, Proval team, LRI
Université Paris-Sud, France
[email protected]

Talk 4. Error bounds for floating-point summation and dot product
The sum and dot product of vectors of floating-point numbers are ubiquitous in numerical calculations. For four decades the error has been bounded by the classical Wilkinson estimates. However, those contain a nasty denominator covering higher order terms. In this talk we show that the latter can be omitted. A key to the (mostly) simple proofs is our ufp concept, denoting the "unit in the first place". In contrast to the well-known ulp (unit in the last place), it is defined for real numbers and allows sharp, simple and nice error estimates for floating-point operations. The practical relevance of the new estimates is limited; however, they are aesthetically pleasing and confirm that it is true what one may (hope or) expect.
Siegfried M. Rump
Institute for Reliable Computing
Hamburg University of Technology
Hamburg, Germany, and
Faculty of Science and Engineering
Waseda University, Tokyo, Japan
[email protected]

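The ufp concept itself is one line of arithmetic: ufp(x) is the weight of the leading significant bit, i.e. the largest power of two not exceeding |x|, and unlike ulp it is defined for any nonzero real number. A minimal sketch for binary64 inputs:

```python
import math

def ufp(x):
    # "unit in the first place": largest power of two not exceeding |x|;
    # by convention ufp(0) = 0 here
    if x == 0.0:
        return 0.0
    m, e = math.frexp(abs(x))     # x = m * 2**e with 0.5 <= m < 1
    return math.ldexp(0.5, e)     # i.e. 2**(e - 1)
```

For instance ufp(3.5) = 2 and ufp(0.3) = 0.25, whereas ulp would depend on the precision of the format.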
MS 71. Theoretical and applied aspects of graph Laplacians
Talk 1. Potential theory for perturbed Laplacians of finite networks
Given a symmetric and irreducible M-matrix M, the off-diagonal entries of M can be identified with the conductance function of a connected network Γ. In particular, the matrix obtained by choosing

    d_i = ∑_{j=1, j≠i}^n c_ij,

where n is the order of the network, is nothing but the combinatorial Laplacian of the network. Therefore, any matrix with off-diagonal values given by −c_ij can be considered as a perturbed Laplacian of Γ.
From the operator theory point of view, the perturbed Laplacians are identified with the so-called discrete Schrödinger operators of the network Γ. Our main objective is the study of positive semi-definite Schrödinger operators. In fact, many of our techniques and results appear as the discrete counterpart of the standard treatment of the resolvent of elliptic operators on Riemannian manifolds. Joint work with E. Bendito, A. Carmona and A.M. Encinas.
Margarida Mitjana
Dep. Matematica Aplicada I
Universitat Politècnica de Catalunya
[email protected]

Enrique Bendito
Dep. Matematica Aplicada III
Universitat Politècnica de Catalunya
[email protected]

Angeles Carmona
Dep. Matematica Aplicada III
Universitat Politècnica de Catalunya
[email protected]

Andres M. Encinas
Dep. Matematica Aplicada III
Universitat Politècnica de Catalunya
[email protected]

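The combinatorial Laplacian described above is straightforward to assemble from a conductance function. The sketch below builds it for a hypothetical small network and checks the two defining properties: zero row sums (diagonal d_i equals the sum of incident conductances) and positive semidefiniteness of the quadratic form:

```python
def laplacian(n, conductances):
    # combinatorial Laplacian of an undirected network on n nodes:
    # L[i][j] = -c_ij off the diagonal, L[i][i] = sum_j c_ij
    L = [[0.0] * n for _ in range(n)]
    for (i, j), c in conductances.items():
        L[i][j] -= c; L[j][i] -= c
        L[i][i] += c; L[j][j] += c
    return L

def quad_form(L, v):
    # v^T L v = sum over edges of c_ij (v_i - v_j)^2, hence always >= 0
    n = len(v)
    return sum(v[i] * L[i][j] * v[j] for i in range(n) for j in range(n))
```

A perturbed Laplacian (discrete Schrödinger operator) in the sense of the abstract would then be L plus a diagonal potential term.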
Talk 2. Subclasses of graphs with partial ordering with respect to the spectral radius of generalized graph Laplacians
Brualdi and Solheid (1986) proposed the following general problem, which became a classical problem in spectral graph theory: "Given a set G of graphs, find an upper bound for the spectral radius in this set and characterize the graphs in which the maximal spectral radius is attained." There now exists extensive literature that characterizes such extremal graphs for quite a few such sets. Moreover, the problem has been generalized to the (signless) Laplacian matrix of graphs, and there even exist a few contributions that provide results for non-linear generalizations of these matrices, such as the p-Laplacian.
Many of the proofs for these results apply graph perturbations that increase or decrease the spectral radius. In graph classes, like trees, one may eventually arrive at a graph with maximum or minimum spectral radius. As a by-product, we get a partial ordering of graphs in such classes. This procedure may also work for generalized graph Laplacians (symmetric matrices with non-positive off-diagonal entries) like Dirichlet matrices.
In this talk we want to find a general framework for deriving such results. We present sets of graphs and generalized Laplacians where this procedure can be applied.
This is joint work with Turker Bıyıkoglu.
Josef Leydold
Institute for Statistics and Mathematics
WU Vienna University of Economics and Business
[email protected]

Turker BıyıkogluDepartment of MathematicsIsık University,[email protected]

Talk 3. Some new results on the signless Laplacian of graphs
Recently the signless Laplacian spectrum has attracted much attention in the literature. In this talk we focus on some new results about the signless Laplacian spectrum which are inspired by the results so far established for the adjacency or the Laplacian spectrum.
Slobodan K. Simic, Mathematical Institute of the Serbian Academy of Sciences and Arts, [email protected]

Talk 4. Graph bisection from the principal normalized Laplacian eigenvector
Graph bisection is the most often encountered form of graph partitioning: divide the nodes of a network into two non-overlapping groups such that the number of links between nodes in different groups is minimized. However, graph bisection may also be understood as the search for the largest bipartite subgraph in the complement: if the two groups of G should have sizes roughly equal to n/2, and if m1 and m2 are the numbers of intra-group and inter-group links, respectively, then the numbers of intra-group and inter-group links in the complement GC will be roughly equal to n²/4 − m1 and n²/4 − m2. Hence, the request to minimize m2 translates into the request to maximize n²/4 − m2.
Here we may relate to a well-known property of the spectrum of the normalized Laplacian matrix L* of G, saying that G is bipartite if and only if 2 is the largest eigenvalue of L*. If G is bipartite, then the signs of the components of the eigenvector corresponding to the eigenvalue 2 of L* yield the bipartition of G. In the more interesting case when G is not bipartite, one may still expect that the signs of the components of the eigenvector corresponding to the largest eigenvalue of L* will yield a large bipartite subgraph of G. For example, it yields a perfect classification of club members in Zachary's karate club network. More interestingly, the sizes of the bipartite subgraphs obtained in this way in instances of random networks show high similarity to those obtained by applying the old Erdős method, an iterative process which starts with a random partition into two groups and, as long as there exists a node having more than half of its neighbors in its own group, moves such a node to the other group. This similarity is explored in more detail in this talk.
Dragan Stevanović, University of Primorska, UP IAM, Koper, Slovenia, and University of Niš, Serbia, [email protected]
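The sign-based bisection described above can be sketched in a few lines of NumPy (an editorial illustration, not the speaker's code; the 4-cycle test graph is a hypothetical choice): extract the eigenvector for the largest eigenvalue of L* and split the vertices by sign.

```python
import numpy as np

def bisect_by_sign(A):
    """Sign pattern of the eigenvector for the largest eigenvalue of the
    normalized Laplacian L* = I - D^{-1/2} A D^{-1/2}."""
    dinv = np.diag(1.0 / np.sqrt(A.sum(axis=1)))
    Lstar = np.eye(len(A)) - dinv @ A @ dinv
    w, V = np.linalg.eigh(Lstar)        # eigenvalues in ascending order
    return w[-1], np.sign(V[:, -1])

# 4-cycle (bipartite): the largest eigenvalue is exactly 2 and the signs
# recover the two colour classes {0, 2} and {1, 3}
A = np.array([[0, 1, 0, 1], [1, 0, 1, 0], [0, 1, 0, 1], [1, 0, 1, 0]], float)
lam, signs = bisect_by_sign(A)
```

For a non-bipartite graph the same sign split would only suggest a large bipartite subgraph, as the abstract explains.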

MS 72. Linear techniques for solving nonlinear equations
Talk 1. A Gauss-Seidel process in iterative methods for solving nonlinear equations
In this talk, we present a process named “Gauss-Seidelization” for solving nonlinear equations. It is an iterative process based on the well-known Gauss-Seidel method for numerically solving a system of linear equations. Together with some convergence results, we show several numerical experiments that emphasize how the Gauss-Seidelization process influences the dynamical behaviour of an iterative method for solving nonlinear equations.
Jose M. Gutierrez, Dept. of Mathematics and Computer Sciences, University of La Rioja, Logrono, Spain, [email protected]

A. Magrenan, Dept. of Mathematics and Computer Sciences, University of La Rioja, Logrono, Spain, [email protected]

J. L. Varona, Dept. of Mathematics and Computer Sciences, University of La Rioja, Logrono, Spain, [email protected]
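As a rough illustration of the idea (an editorial sketch, not the authors' Gauss-Seidelization), one can sweep through a nonlinear system component by component, taking one scalar Newton step per equation and reusing updated values immediately; the 2×2 test system below is hypothetical, with exact solution (1, 2).

```python
def gauss_seidelize(F, x, sweeps=50, h=1e-7):
    """Sweep the equations in order; for each, take one scalar Newton step
    in the corresponding unknown, reusing updated values immediately."""
    for _ in range(sweeps):
        for i, fi in enumerate(F):
            f0 = fi(x)
            xp = list(x)
            xp[i] += h
            slope = (fi(xp) - f0) / h   # finite-difference d f_i / d x_i
            x[i] -= f0 / slope
    return x

# hypothetical test system with exact solution (1, 2):
#   x^2 + y - 3 = 0,   x + y^2 - 5 = 0
F = [lambda v: v[0]**2 + v[1] - 3,
     lambda v: v[0] + v[1]**2 - 5]
sol = gauss_seidelize(F, [1.5, 1.5])
```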

Talk 2. A greedy algorithm for the convergence of a fractional blind deconvolution
In this talk we present new results on a greedy algorithm to study the convergence of fractional blind deconvolution based on fractional powers of the Laplacian. Greedy algorithms provide a theoretical framework to prove convergence in a Hilbert space. We will show the theoretical results and an application to baroque paintings.
Pantaleon David Romero Sanchez, Departamento de Ciencias Fisicas, Matematicas y de la Computacion, Universidad CEU-Cardenal Herrera, [email protected]

Vicente F. CandelaDepartamento de Matematica AplicadaUniversidad de [email protected]

Talk 3. Overview of iterative methods using a variational approach
In this talk, we introduce a general framework for the approximation of nonlinear equations in Banach spaces. We adapt a variational perspective recently introduced for the analysis of differential equations. In this new approach, some classical iterative methods, including their convergence analysis, can be recovered. The method can be considered as a generalization of the discrete least squares method, which is a standard approach to the approximate solution of many types of problems, including overdetermined systems of nonlinear equations.
S. Busquier, Dept. of Applied Mathematics and Statistics, U.P. Cartagena, [email protected]

S. Amat


Dept. of Applied Mathematics and StatisticsU.P. [email protected]

P. PedregalE.T.S. Ingenieros Industriales.Universidad de Castilla La [email protected]

Talk 4. Iterative methods for ill-conditioned problems
In this talk we study some features of ill-conditioned nonlinear equations. A strategy to choose iterative methods for solving these equations is developed. In particular, we analyze a variation of Newton's method, the so-called perturbed Newton method. Features important for good performance, such as the choice of pivots and parameters, provide degrees of freedom that can be tuned to improve the method. We illustrate this analysis through examples.
Rosa M. Peris

Dept. of Applied MathematicsUniversity of [email protected]

Vicente F. CandelaDept. of Applied MathematicsUniversity of [email protected]

MS 73. Algebraic Riccati equations associated with M-matrices: numerical solution and applications
Talk 1. Monotone convergence of Newton-like methods for M-matrix algebraic Riccati equations
The minimal nonnegative solution of an M-matrix algebraic Riccati equation can be obtained by Newton's method. Here we study Newton-like methods that have higher-order convergence, are not much more expensive per iteration, and are thus more efficient than Newton's method. For the Riccati equation, these Newton-like methods are actually special cases of the Newton–Shamanskii method. We show that, starting with a zero initial guess or some other suitable initial guess, the sequence generated by the Newton–Shamanskii method converges monotonically to the minimal nonnegative solution.
Chun-Hua Guo, Dept. of Mathematics and Statistics, University of Regina, [email protected]
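A minimal dense sketch of the baseline Newton iteration for the MARE XDX − AX − XB + C = 0 (the method the talk accelerates; the 2×2 coefficients are editorial toy data, chosen so that [[B, −D], [−C, A]] is a nonsingular M-matrix). Each Newton correction is a Sylvester solve.

```python
import numpy as np
from scipy.linalg import solve_sylvester

# Toy MARE  X D X - A X - X B + C = 0  (illustrative data, not the talk's)
A = np.array([[3.0, -1.0], [-1.0, 3.0]])
B = np.array([[2.0, -1.0], [-1.0, 2.0]])
D = 0.1 * np.eye(2)
C = 0.5 * np.ones((2, 2))

def newton_mare(A, B, C, D, iters=20):
    X = np.zeros_like(C)                          # zero initial guess
    for _ in range(iters):
        R = X @ D @ X - A @ X - X @ B + C         # residual F(X)
        # Newton correction H solves (X D - A) H + H (D X - B) = -F(X)
        X = X + solve_sylvester(X @ D - A, D @ X - B, -R)
    return X

X = newton_mare(A, B, C, D)
res = np.linalg.norm(X @ D @ X - A @ X - X @ B + C)
```

From the zero guess the iterates stay (entrywise) nonnegative, consistent with the monotone convergence discussed in the talk.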

Talk 2. Accurate solution of M-matrix algebraic Riccati equations by ADDA: alternating-directional doubling algorithm
It is known that an M-matrix Algebraic Riccati Equation (MARE) XDX − AX − XB + C = 0 has a unique minimal nonnegative solution Φ. In this talk, we will discuss two recent developments:

• a relative perturbation theory showing that small relative perturbations to the entries of A, B, C, and D introduce only small relative changes to the entries of the nonnegative solution Φ, unlike the existing perturbation theory for (general) Algebraic Riccati Equations;

• an efficient Alternating-Directional Doubling Algorithm (ADDA) that can compute such a solution Φ as accurately as predicted by the relative perturbation theory.

Ren-Cang LiDept. of MathematicsUniversity of Texas at Arlington

[email protected]

Wei-guo WangSchool of Mathematical SciencesOcean University of [email protected]

Wei-chao WangDept. of MathematicsUniversity of Texas at [email protected]

Shufang XuSchool of Mathematical SciencesPeking [email protected]

Jungong XueSchool of Mathematical ScienceFudan [email protected]

Talk 3. When fluid becomes Brownian: the morphing of Riccati into quadratic equations
Ramaswami (2011) approximates Brownian motion using a Markov-modulated linear fluid model. This is extended to approximate Markov-modulated Brownian motion (MMBM); in particular, the Laplace matrix exponent of a Markov-modulated linear fluid model converges to that of an MMBM. When the MMBM is reflected at zero, its stationary distribution is the limit of that of the fluid model also reflected at zero. The proof of convergence combines probabilistic arguments and linear algebra. Starting from the Wiener-Hopf factorization of the modulating Markov chain, expressed as an algebraic Riccati equation, key matrices in the limiting stationary distribution are shown to be solutions of a new quadratic equation.
Giang T. Nguyen, Departement d'Informatique, Universite Libre de Bruxelles, [email protected]

Guy LatoucheDepartement d’InformatiqueUniversite Libre de [email protected]

V. RamaswamiAT&T Labs, [email protected]

Talk 4. Analyzing multi-type queues with general customer impatience using Riccati equations
We consider a class of multi-type queueing systems with customer impatience, where the (phase-type) service time and (general) patience distribution are type dependent and the types of consecutive customers may be correlated. To obtain the per-type waiting time distribution and probability of abandonment, we construct a fluid queue with r thresholds, the steady state of which can be expressed via the solution of 2r algebraic Riccati equations using matrix-analytic methods. Numerical examples indicate that thousands of thresholds may be required to obtain accurate results, indicating the need for fast and numerically stable algorithms to solve large sets of Riccati equations.
Benny Van Houdt, Dept. of Mathematics and Computer Science, University of Antwerp, [email protected]

MS 74. Recent advances in the numerical solution oflarge scale matrix equations


Talk 1. Hierarchical and multigrid methods for matrix and tensor equations
Hierarchical and multigrid methods are among the most efficient methods for the solution of large-scale systems that stem, e.g., from the discretization of partial differential equations (PDE). In this talk we will review the generalization of these methods to the solution of matrix equations (L. Grasedyck, W. Hackbusch, A Multigrid Method to Solve Large Scale Sylvester Equations, SIAM J. Matrix Anal. Appl. 29, pp. 870–894, 2007; L. Grasedyck, Nonlinear multigrid for the solution of large scale Riccati equations in low-rank and H-matrix format, Numer. Linear Algebra Appl. 15, pp. 779–807, 2008), and of equations that possess a tensor structure (L. Grasedyck, Hierarchical Singular Value Decomposition of Tensors, SIAM J. Matrix Anal. Appl. 31, pp. 2029–2054, 2010). The standard hierarchical and multigrid methods can perfectly be combined with low rank (matrix) and low tensor rank representations. The benefit is that the solution is computable in almost optimal complexity with respect to the amount of data needed for the representation of the solution. As an example we consider a PDE posed in a product domain Ω × Ω, Ω ⊂ R^d, and discretized with N^d basis functions for the domain Ω. Under separability assumptions on the right-hand side the system is solved in low rank form in O(N^d) complexity (instead of the O(N^{2d}) required for the full solution). For a PDE on the product domain Ω × ··· × Ω one can even solve the system in low tensor rank form in O(N · d) complexity (instead of the O(N^{2d}) required for the full solution). The state of the art will be shortly summarized.
Lars Grasedyck, Inst. for Geometry and Practical Mathematics, RWTH Aachen, [email protected]

Talk 2. A survey on Newton-ADI based solvers for large scale AREs
Newton-based solvers for the algebraic Riccati equation (ARE) have been around for several decades. Only roughly one decade ago did these become applicable to large and sparse AREs, when combined with the low rank Cholesky factor alternating directions implicit (LRCF-ADI) iteration for solving the Lyapunov equations arising in every Newton step. In this contribution we give a survey of accelerated variants of the low rank Cholesky factor Newton method (LRCF-NM) developed over recent years. These include projection based methods and inexact Newton-like approaches.
Jens Saak, Computational Methods in Systems and Control Theory, Max Planck Institute for Dynamics of Complex Technical Systems, and Department of Mathematics, Chemnitz University of Technology, [email protected]

Talk 3. An invariant subspace method for large-scale algebraic Riccati and Bernoulli equations
We present a new family of low-rank approximations of the solution of the algebraic Riccati equation based on stable invariant subspaces of the Hamiltonian matrix (L. Amodei and J.-M. Buchot, An invariant subspace method for large-scale algebraic Riccati equations, Appl. Numer. Math., 60 (11), 1067-1082, 2010). The approximations are given in a factorized form which preserves the positive semi-definiteness of the exact solution. By using the same general factorization, we deduce a reduced rank expression of the exact solution of the Bernoulli equation. We illustrate the effectiveness of this formulation by considering the stabilization of the nonstationary incompressible Navier-Stokes equations by a boundary feedback control obtained from the solution of the Bernoulli equation (L. Amodei and J.-M. Buchot, A stabilization algorithm of the Navier-Stokes equations based on algebraic Bernoulli equation, Numer. Linear Algebra Appl., DOI: 10.1002/nla.799, 2011).
Luca Amodei, Institut de Mathematiques de Toulouse, Universite Paul Sabatier, 118 route de Narbonne, 31062 Toulouse Cedex 9, France, [email protected]

Jean-Marie Buchot, Institut de Mathematiques de Toulouse, Universite Paul Sabatier, [email protected]
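For small problems the invariant-subspace idea can be sketched with a dense eigensolver: form the Hamiltonian H = [[A, −BBᵀ], [−Q, −Aᵀ]], span its stable invariant subspace by [U1; U2], and set X = U2 U1⁻¹. This is the generic textbook construction on assumed toy data (a double integrator), not the speakers' large-scale low-rank algorithm.

```python
import numpy as np

# Double integrator with Q = I, R = 1; CARE: A^T X + X A - X G X + Q = 0
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
Q = np.eye(2)
G = B @ B.T

H = np.block([[A, -G], [-Q, -A.T]])   # Hamiltonian matrix
w, V = np.linalg.eig(H)
stable = V[:, w.real < 0]             # basis of the stable invariant subspace
U1, U2 = stable[:2, :], stable[2:, :]
X = (U2 @ np.linalg.inv(U1)).real     # stabilizing solution X = U2 U1^{-1}

res = np.linalg.norm(A.T @ X + X @ A - X @ G @ X + Q)
```

For this example the exact solution is X = [[√3, 1], [1, √3]], which the construction recovers.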

Talk 4. Delay Lyapunov equations and model order reductionof time delay systemsWe present a version of balanced truncation for model orderreduction of linear time-delay systems. The procedure is basedon a coordinate transformation of the position and preserves thedelay structure of the system. To every position we associatequantities representing energies for the controllability andobservability of the position. We show that these energies can beexpressed explicitly in terms of Gramians which are given assolutions to corresponding delay Lyapunov equations. Balancedtruncation can be based on these Gramians in the usual way.Tobias Damm

Dept. of MathematicsUniversity of [email protected]

Elias JarlebringDept. Num. Analysis,KTH, [email protected]

Wim MichielsDepartment of Computer ScienceK.U. [email protected]

MS 75. Points that minimize potential functions
Talk 1. Discretizing compact manifolds with minimal energy
One approach to generating “good” point configurations is inspired by physics: points emulate repelling unit point charges interacting through a Coulomb potential 1/r, where r measures the Euclidean distance between points in the ambient space. Points on the sphere that minimize the corresponding energy (potential energy) are uniformly distributed, even if a Riesz s-potential 1/r^s governs the point interaction. On other compact manifolds, minimal energy systems are uniformly distributed if s is greater than the dimension d of the manifold (the “Poppy Seed Bagel” theorem). This talk gives an introduction to the discrete minimal Riesz s-energy problem on compact manifolds.
Johann S. Brauchart, School of Mathematics and Statistics, University of New South Wales, [email protected]

Talk 2. Well conditioned spherical designs and potential functions
A spherical t-design is a set of N points on the unit sphere such that equal-weight numerical integration at these points is exact for all spherical polynomials of degree at most t. This talk will look at the calculation and properties of spherical designs with N between t²/2 and (t+1)², their characterization by different potential functions, and the use of any remaining degrees of freedom to also minimize the condition number of the basis matrix or maximize the determinant of the associated Gram matrix.
Robert S. Womersley, School of Mathematics and Statistics, University of New South Wales, [email protected]
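The defining property is easy to check numerically. The snippet below (an editorial illustration) verifies that the six octahedron vertices form a spherical 3-design, and that they fail at degree 4; the closed-form monomial integrals over the sphere are standard.

```python
import numpy as np
from math import gamma
from itertools import product

# Octahedron vertices: a spherical 3-design on S^2
pts = np.array([[1, 0, 0], [-1, 0, 0], [0, 1, 0],
                [0, -1, 0], [0, 0, 1], [0, 0, -1]], float)

def sphere_avg(a, b, c):
    """Exact average of x^a y^b z^c over the unit sphere."""
    if a % 2 or b % 2 or c % 2:
        return 0.0
    num = 2 * gamma((a + 1) / 2) * gamma((b + 1) / 2) * gamma((c + 1) / 2)
    return num / gamma((a + b + c + 3) / 2) / (4 * np.pi)

# equal-weight averages match exact integrals for every monomial of degree <= 3
errs = [abs((pts[:, 0]**a * pts[:, 1]**b * pts[:, 2]**c).mean() - sphere_avg(a, b, c))
        for a, b, c in product(range(4), repeat=3) if a + b + c <= 3]
max_err = max(errs)

# ... but not for degree 4: the octahedron is not a spherical 4-design
x4_gap = abs((pts[:, 0]**4).mean() - sphere_avg(4, 0, 0))   # = 1/3 - 1/5
```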

Talk 3. Probabilistic frames in the 2-Wasserstein metricIn this talk I will review the notion of probabilistic frames andindicate how it is a natural generalization of frames. In addition,I will show how within the framework of the 2-Wassersteinmetric, probabilistic frames can be constructed and analyzed.This is based on joint work with M. Ehler.Kasso OkoudjouDept. of MathematicsUniversity of [email protected]

Talk 4. Numerical minimization of potential energies on specific manifolds
In this talk we consider the problem of computing local minimizers of potential energies of the form

    E(x1, . . . , xM) := Σ_{i,j=1; i≠j}^{M} K(xi, xj),

with xi ∈ M, i = 1, . . . , M, where M ⊂ R^n is a d-dimensional compact manifold and K : M × M → R is a given function. The optimization approach is based on a nonlinear conjugate gradient method on Riemannian manifolds, which is a generalization of the CG-method in Euclidean space. This method was already successfully applied to the computation of spherical t-designs in [Numer. Math., 119:699–724, 2011] and low discrepancy points for polynomial kernels K, cf. [TU Chemnitz, Preprint 5, 2011]. Now, we present interesting numerical results for localized non-polynomial discrepancy kernels K.
Manuel Graf, Dept. of Mathematics, Chemnitz University of Technology, [email protected]


CP 1. Polynomial equations I

Talk 1. Solving multivariate vector polynomial interpolation problems
The aim of this talk is to present an algorithm for computing a generating set for all multivariate polynomial vectors (g1(z), g2(z), ..., gm(z))^T that satisfy the homogeneous interpolation conditions pk1 g1(ωk) + pk2 g2(ωk) + ··· + pkm gm(ωk) = 0 for all 1 ≤ k ≤ l, where the ωk are multivariate interpolation points with corresponding interpolation data pkj. Moreover, we are interested in solutions having a specific degree structure. The algorithm will be constructed in such a way that it is easy to extract such solutions. At the same time we also look at different ways of ordering the monomials of multivariate polynomials, in order to obtain more specific information about the degree structure of our solution module. It will turn out that under certain conditions the generating set of the solution module constructed by the algorithm forms a Gröbner basis.
Clara Mertens, Dept. of Computer Science, KU Leuven, [email protected]

Raf Vandebril, Dept. of Computer Science, KU Leuven, [email protected]

Marc Van Barel, Dept. of Computer Science, KU Leuven, [email protected]

Talk 2. A general condition number for polynomial evaluation

In this talk we present a new expression of the condition number for polynomial evaluation valid for any polynomial basis obtained from a linear recurrence. This expression extends the classical ones for the power and Bernstein bases, providing a general framework for all the families of orthogonal polynomials. The use of this condition number permits us to give a general theorem about the forward error in the evaluation of finite series in any of these polynomial bases by means of the extended Clenshaw algorithm. A running-error bound is also presented, and all the bounds are compared in several numerical examples.
Sergio Serrano, Dept. Matematica Aplicada and IUMA, Universidad de Zaragoza, Spain, [email protected]

Roberto Barrio, Dept. Matematica Aplicada and IUMA, Universidad de Zaragoza, Spain, [email protected]

Hao Jiang, PEQUAN team, LIP6, Universite Pierre et Marie Curie, [email protected]
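For reference, the basic (unextended) Clenshaw recurrence for a Chebyshev series looks as follows; this editorial sketch omits the running-error bookkeeping discussed in the talk, and the test coefficients are arbitrary.

```python
import numpy as np

def clenshaw_chebyshev(c, x):
    """Evaluate sum_k c[k] T_k(x) with the Clenshaw recurrence
    b_k = 2 x b_{k+1} - b_{k+2} + c_k, then p = x b_1 - b_2 + c_0."""
    b1, b2 = 0.0, 0.0
    for ck in c[:0:-1]:                 # c[n], ..., c[1]
        b1, b2 = 2 * x * b1 - b2 + ck, b1
    return x * b1 - b2 + c[0]

c = np.array([1.0, 0.5, -0.25, 0.125, 0.0625])
x = 0.3
val = clenshaw_chebyshev(c, x)
ref = np.polynomial.chebyshev.chebval(x, c)   # reference evaluation
```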

Talk 3. The geometry of multivariate polynomial division and elimination
Multivariate polynomials are usually discussed in the framework of algebraic geometry. Solving problems in algebraic geometry usually involves the use of a Gröbner basis. This talk will show that linear algebra without any Gröbner basis computation suffices to solve basic problems from algebraic geometry by describing three operations: multiplication, division and elimination. This linear algebra framework will also allow us to give a geometric interpretation. Division will involve oblique projections, and a link between elimination and principal angles between subspaces (CS decomposition) will be revealed. The main computations in this framework are the QR and singular value decompositions of sparse structured matrices.
Kim Batselier, Department of Electrical Engineering (ESAT), SCD, KU Leuven, 3001 Leuven, Belgium, [email protected]

Philippe DreesenDepartment of Electrical Engineering (ESAT), SCDKU Leuven, 3001 Leuven, [email protected]

Bart De MoorDepartment of Electrical Engineering (ESAT), SCDKU Leuven, 3001 Leuven, [email protected]

Talk 4. Characterization and construction of classical orthogonal polynomials using a matrix approach
We identify a polynomial sequence un(x) = an,0 + an,1 x + ··· + an,n x^n with the infinite lower triangular matrix [an,k]. Using such matrices we obtain basic properties of orthogonal sequences, such as the representation as characteristic polynomials of tri-diagonal matrices and the 3-term recurrence relation un+1(x) = (x − βn) un(x) − αn un−1(x). Then we characterize the classical orthogonal sequences as those that satisfy the equation (5v + w)α2 = (v + 2w)(v² + α1), where v = β1 − β0, w = β2 − β1, α1 > 0, α2 > 0, and that also have a certain pair of diagonals equal to zero in a matrix constructed from [an,k]. For each choice of the parameters v, w, α1, we find explicit expressions for all the αk, βk. In the case v = w = 0 we obtain a one-parameter family of classical orthogonal sequences that includes the Chebyshev, Legendre, and Hermite sequences and also contains the sequence of derivatives of each of its elements. Our matrix methods can be used to study other classes of orthogonal sequences.
Luis Verde-Star, Department of Mathematics, Universidad Autonoma Metropolitana, Mexico, [email protected]
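The representation of un as the characteristic polynomial of a tridiagonal matrix can be illustrated numerically: for the probabilists' Hermite sequence (βn = 0, αn = n), the eigenvalues of the symmetrized Jacobi matrix are exactly the roots of He_n. An editorial sketch, not the speaker's construction:

```python
import numpy as np

# Probabilists' Hermite: beta_n = 0, alpha_n = n, so
# He_{n+1}(x) = x He_n(x) - n He_{n-1}(x)
n = 5
beta = np.zeros(n)
alpha = np.arange(1.0, n)                 # alpha_1 .. alpha_{n-1}

# symmetrized Jacobi matrix: diag beta, off-diagonals sqrt(alpha_k)
J = np.diag(beta) + np.diag(np.sqrt(alpha), 1) + np.diag(np.sqrt(alpha), -1)
nodes = np.sort(np.linalg.eigvalsh(J))    # = roots of He_n

def he(k, x):
    """Evaluate He_k by the three-term recurrence."""
    p0, p1 = np.ones_like(x), x
    for m in range(1, k):
        p0, p1 = p1, x * p1 - m * p0
    return p1 if k >= 1 else p0

vals = he(n, nodes)                       # should vanish at the eigenvalues
```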

CP 2. Structured matrices I

Talk 1. Determinants and inverses of circulant matrices with Jacobsthal and Jacobsthal-Lucas numbers
Let Jn := circ(J1, J2, . . . , Jn) and ℷn := circ(j0, j1, . . . , jn−1) be the n × n circulant matrices (n ≥ 3) whose elements are the Jacobsthal and Jacobsthal-Lucas numbers, respectively. The determinants of Jn and ℷn are obtained in terms of Jn and jn, respectively. These imply that Jn and ℷn are invertible. We also derive the inverses of Jn and ℷn.
Durmus Bozkurt, Department of Mathematics, Science Faculty, Selcuk University, [email protected]

Tin-Yau Tam, Department of Mathematics and Statistics, Auburn University, [email protected]
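Numerically, the determinant and the inverse of any circulant come straight from the DFT of its first column, since the DFT diagonalizes circulants. An illustrative check with the first Jacobsthal numbers (editorial sketch, not the talk's closed formulas):

```python
import numpy as np
from scipy.linalg import circulant

def jacobsthal(n):
    """Jacobsthal numbers J_1..J_n (J_1 = J_2 = 1, J_k = J_{k-1} + 2 J_{k-2})."""
    J = [1, 1]
    while len(J) < n:
        J.append(J[-1] + 2 * J[-2])
    return np.array(J[:n], float)

c = jacobsthal(5)
C = circulant(c)                  # scipy convention: c is the first column
lam = np.fft.fft(c)               # eigenvalues of the circulant
det_fft = np.prod(lam).real
det_ref = np.linalg.det(C)

# the inverse is again circulant, with first column ifft(1 / lam)
Cinv = circulant(np.fft.ifft(1.0 / lam))
err = np.linalg.norm(C @ Cinv - np.eye(5))
```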

Talk 2. Determinants and inverses of circulant matrices with Pell and Pell-Lucas numbers
In this talk, we define two n-square circulant matrices whose elements are Pell and Pell-Lucas numbers, in the following form:

    Pn = [ P1    P2    ···  Pn−1  Pn
           Pn    P1    ···  Pn−2  Pn−1
           Pn−1  Pn    ···  Pn−3  Pn−2
            ⋮     ⋮           ⋮     ⋮
           P2    P3    ···  Pn    P1 ]

and

    Qn = [ Q1    Q2    ···  Qn−1  Qn
           Qn    Q1    ···  Qn−2  Qn−1
           Qn−1  Qn    ···  Qn−3  Qn−2
            ⋮     ⋮           ⋮     ⋮
           Q2    Q3    ···  Qn    Q1 ]

where Pn is the nth Pell number and Qn is the nth Pell-Lucas number. Then we compute the determinants of these matrices. We also obtain formulas giving the elements of their inverses.

Fatih Yilmaz, Department of Mathematics, Science Faculty, Selcuk University, [email protected]

Durmus Bozkurt, Department of Mathematics, Science Faculty, Selcuk University, [email protected]

Talk 3. Eigenproblem for circulant and Hankel matrices in extremal algebra
Eigenvectors of circulant and Hankel matrices in fuzzy (max-min) algebra are studied. Both types of matrices are determined by the vector of inputs in the first row. Investigation of eigenvectors in max-min algebra is important for applications connected with the reliability of complex systems, with fuzzy relations, and further questions. Many real systems can be represented by matrices of special form. Description of the eigenproblem for the above special types of matrices is important because for special types of matrices the computation can often be performed in a simpler way than in the general case.
Hana Tomaskova, Dept. of Information Technology, University of Hradec Kralove, [email protected]

Martin Gavalec, Dept. of Information Technology, University of Hradec Kralove, [email protected]
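A tiny max-min example (editorial, not from the talk): iterating the product ⊗ from the greatest vector reaches, for this particular 2×2 matrix, a fixed point x with A ⊗ x = x, i.e. a max-min eigenvector.

```python
def maxmin_prod(A, x):
    """(A ⊗ x)_i = max_j min(a_ij, x_j) in the fuzzy (max-min) algebra."""
    return [max(min(a, xj) for a, xj in zip(row, x)) for row in A]

A = [[0.5, 0.7],
     [0.3, 0.6]]
x = [1.0, 1.0]              # start from the greatest possible vector
for _ in range(4):          # the orbit stabilizes after finitely many steps
    x = maxmin_prod(A, x)
# for this A, x is now a fixed point: A ⊗ x = x
```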

Talk 4. Inverses of generalized Hessenberg matrices
Some constructive methods are proposed for the inversion of a generalized (upper) Hessenberg matrix (with subdiagonal rank ≤ 1) A ∈ GL(n, K); see e.g. L. Elsner, Linear Algebra Appl. 409 (2005), pp. 147-152, where a general structure theorem for such matrices was provided. They are based on the related Hessenberg-like matrix B = A(2:n, 1:n-1). If B is also a generator representable matrix, i.e. an,1 ≠ 0, the form A−1 = U−1 + Y X^T is easily obtained using the Sherman-Morrison-Woodbury formula. When B is strictly nonsingular, an inverse factorization A−1 = HL HU, based on Hessenberg matrices, is provided. Concerns about the remaining general situation are also outlined.
Jesus Abderraman Marrero, Dept. of Mathematics Applied to Information Technologies (ETSIT - UPM), Technical University of Madrid, Spain, [email protected]

Mustapha Rachidi, Dept. of Mathematics and Informatics, University Moulay Ismail, Meknes, Morocco, [email protected]
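The Sherman-Morrison-Woodbury identity invoked here reads (A + UVᵀ)⁻¹ = A⁻¹ − A⁻¹U(I + VᵀA⁻¹U)⁻¹VᵀA⁻¹; a quick numerical check on hypothetical 4×4 data (not the generalized-Hessenberg setting of the talk):

```python
import numpy as np

def smw_inverse(Ainv, U, V):
    """(A + U V^T)^{-1} from A^{-1} and the small capacitance matrix."""
    S = np.eye(U.shape[1]) + V.T @ Ainv @ U
    return Ainv - Ainv @ U @ np.linalg.solve(S, V.T @ Ainv)

# hypothetical rank-2 update of a diagonal matrix
A = np.diag([2.0, 3.0, 4.0, 5.0])
U = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [0.0, 0.0]])
V = np.array([[0.0, 1.0], [1.0, 0.0], [0.0, 0.0], [1.0, 1.0]])
M = A + U @ V.T
err = np.linalg.norm(smw_inverse(np.linalg.inv(A), U, V) - np.linalg.inv(M))
```

Only a 2×2 system is solved, which is the point of the formula when A⁻¹ is cheap.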

CP 3. Matrix factorization

Talk 1. Modified symplectic Gram-Schmidt process is mathematically and numerically equivalent to Householder SR algorithm
In this talk, we present two new and important results. The first is that the SR factorization of a matrix A via the modified symplectic Gram-Schmidt (MSGS) algorithm is mathematically equivalent to the Householder SR algorithm applied to an embedded matrix obtained from A by adding two blocks of zeros at the top of the first half and at the top of the second half of the matrix A. The second result is that MSGS is also numerically equivalent to the Householder SR algorithm applied to this embedded matrix. The latter algorithm is a Householder QR-like algorithm, based on specified elementary symplectic transformations which are rank-one modifications of the identity. Numerical experiments will be given.
Ahmed Salam, Laboratory of Pure and Applied Mathematics, University Lille Nord de France, Calais, France, [email protected]

Talk 2. A multi-window approach to deflation in the QR algorithm
Aggressive Early Deflation significantly improved the performance of Francis' QR algorithm by identifying deflations in matrices 'close to' the Hessenberg iterate (see The Multishift QR Algorithm. Part II. Aggressive Early Deflation. Braman, Byers and Mathias. SIAM J. Matrix Anal. Appl., 23(4):948-973, 2002). The perturbations used in AED focused on a 'deflation window', a trailing principal submatrix. Recently, this idea has been extended to investigate the effect of allowing simultaneous perturbations in more than one location. In this talk we present new results on this 'multi-window' approach and its effect on the performance of the QR algorithm.
Karen Braman, Dept. of Mathematics and Computer Science, South Dakota School of Mines and Technology, [email protected]

Talk 3. Aggregation of the compact WY representations generated by the TSQR algorithm
The TSQR (Tall-and-Skinny QR) algorithm is a parallel algorithm for the Householder QR decomposition proposed recently by Langou. Due to its large-grain parallelism, it can achieve high efficiency in both shared-memory and distributed-memory parallel environments. In this talk, we consider the situation where we first compute the QR decomposition of A ∈ R^{m×n} (m ≫ n) by the TSQR algorithm and then compute Q^T B for another matrix B ∈ R^{m×l}. We further assume that the first step is performed on a multicore processor with p cores, while the latter step is performed by an accelerator such as the GPU. In that case, the original TSQR algorithm may not be optimal, since the Q factor generated by the TSQR algorithm consists of many small Householder transformations or compact WY representations of length m/p and 2n, and as a result, the vector length in the computation of Q^T B is shortened. To solve this problem, we propose a technique to aggregate the compact WY representations generated by the TSQR algorithm into one large compact-WY-like representation. In this way, both the large-grain parallelism of the TSQR algorithm and the long vector length in the computation of Q^T B can be exploited. We show the effectiveness of our technique in a hybrid multicore-GPU environment.
Yusaku Yamamoto, Dept. of Computational Science, Kobe University, [email protected]
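A two-level serial sketch of the TSQR factorization itself (editorial; it forms Q explicitly and does no compact WY aggregation, which is the contribution of the talk): factor p row blocks independently, QR the stacked R factors, then propagate the second-level Q back into the block factors.

```python
import numpy as np

def tsqr(A, p=4):
    """Two-level TSQR: independent block QRs, then a QR of the stacked Rs."""
    blocks = np.array_split(A, p, axis=0)
    Qs, Rs = zip(*[np.linalg.qr(Ab) for Ab in blocks])
    Q2, R = np.linalg.qr(np.vstack(Rs))
    n = A.shape[1]
    # propagate the second-level Q back into the first-level factors
    Qparts = [Qb @ Q2[i * n:(i + 1) * n, :] for i, Qb in enumerate(Qs)]
    return np.vstack(Qparts), R

rng = np.random.default_rng(0)
A = rng.standard_normal((1000, 5))        # tall and skinny
Q, R = tsqr(A)
err = np.linalg.norm(Q @ R - A)
orth = np.linalg.norm(Q.T @ Q - np.eye(5))
```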

Talk 4. A generalized SVD for collections of matrices
The generalized SVD for a pair of matrices is a simultaneous diagonalization. Given matrices D1 (m1 × n) and D2 (m2 × n), it is possible to find orthogonal matrices U1 and U2 and a nonsingular X so that U1^T D1 X = Σ1 and U2^T D2 X = Σ2 are both diagonal. In our generalization, we are given matrices D1, . . . , DN, each of which has full column rank equal to n. By working implicitly (and carefully) with the eigensystem of the matrix Σ_{ij} (Di^T Di)(Dj^T Dj)^{−1} we are able to simultaneously “take apart” the Di and discover common features. The new reduction reverts to the GSVD if N = 2.
Charles Van Loan, Dept. of Computer Science, Cornell University, Ithaca, NY, USA, [email protected]

O. Alter, SCI Institute, Dept. Bioengineering, and Dept. Human Genetics, University of Utah, Salt Lake City, UT, USA, [email protected]

S.P. Ponnapalli, Dept. of Electrical and Computer Engineering, University of Texas, Austin, TX, USA, [email protected]

M.A. Saunders, Dept. Management Science and Engineering, Stanford University, Stanford, CA, USA, [email protected]

CP 4. Krylov methods

Talk 1. Fixed-point Lanczos with analytical variable bounds
We consider the problem of establishing analytical bounds on all variables calculated during the symmetric Lanczos process, with the objective of enabling fixed-point implementations with no overflow. Current techniques fail to provide practical bounds for nonlinear recursive algorithms. We employ a diagonal preconditioner to control the range of all variables, regardless of the condition number of the original matrix. Linear algebra techniques are used to prove the proposed bounds. It is shown that the resulting fixed-point implementations can lead to similar numerical behaviour as with double precision floating-point, while providing very significant performance improvements in custom hardware implementations.
Juan L. Jerez, Dept. of Electrical and Electronic Engineering, Imperial College London, [email protected]

George A. ConstantinidesDept. of Electrical and Electronic EngineeringImperial College [email protected]

Eric C. KerriganDept. of Electrical and Electronic Engineering and Dept. of Aeronautics

Imperial College [email protected]
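For orientation, a plain floating-point version of the symmetric Lanczos process (an editorial sketch without the fixed-point bounds or diagonal preconditioning that are the subject of the talk):

```python
import numpy as np

def lanczos(A, v, k):
    """k steps of symmetric Lanczos (no reorthogonalization): returns an
    orthonormal basis V (n x k) and the tridiagonal matrix T (k x k)."""
    n = len(v)
    V = np.zeros((n, k))
    alpha, beta = np.zeros(k), np.zeros(k - 1)
    V[:, 0] = v / np.linalg.norm(v)
    w = A @ V[:, 0]
    alpha[0] = V[:, 0] @ w
    w = w - alpha[0] * V[:, 0]
    for j in range(1, k):
        beta[j - 1] = np.linalg.norm(w)
        V[:, j] = w / beta[j - 1]
        w = A @ V[:, j] - beta[j - 1] * V[:, j - 1]
        alpha[j] = V[:, j] @ w
        w = w - alpha[j] * V[:, j]
    T = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)
    return V, T

rng = np.random.default_rng(0)
M = rng.standard_normal((50, 50))
A = (M + M.T) / 2                         # symmetric test matrix
V, T = lanczos(A, rng.standard_normal(50), 10)
orth = np.linalg.norm(V.T @ V - np.eye(10))
```

The Ritz values (eigenvalues of T) lie inside the spectrum of A, and the nonlinear recursion on w is exactly the quantity whose range the talk's fixed-point bounds must control.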

Talk 2. An Arnoldi-based method for model order reduction of delay systems
For large scale time-delay systems, Michiels, Jarlebring, and Meerbergen gave an efficient Arnoldi-based model order reduction method. To reduce the order from n to k, their method needs nk²/2 memory. In this talk, we propose a new implementation of the Arnoldi process which is numerically stable and needs only nk memory.
Yujie Zhang, School of Mathematical Sciences, Fudan University, [email protected]

Yangfeng SuSchool of Mathematical SciencesFudan [email protected]

Talk 3. The Laurent-Arnoldi process, Laurent interpolation, and an application to the approximation of matrix functions
The Laurent-Arnoldi process is an analog of the standard Arnoldi process applied to the extended Krylov subspace. It produces an orthogonal basis for the subspace along with a generalized Hessenberg matrix whose entries consist of the recursion coefficients. As in the standard case, the application of the process to certain types of linear operators results in recursion formulas with few terms. One instance of this occurs when the operator is isometric. In this case, the recursion matrix is the pentadiagonal CMV matrix and the Laurent-Arnoldi process essentially reduces to the isometric Arnoldi process, in which the underlying measures differ only by a rotation in the complex plane. The other instance occurs when the operator is Hermitian. This case produces an analog of the Lanczos process where, analogous to the CMV matrix, the recursion matrix is pentadiagonal. The Laurent polynomials generated by the recursion coefficients have properties similar to those of the Lanczos polynomials. We discuss the interpolating properties of these polynomials in order to determine remainder terms for rational Gauss and Radau rules. We then apply our results to the approximation of matrix functions and functionals.
Carl Jagels, Dept. of Mathematics, Hanover College, Hanover, IN, [email protected]

Lothar Reichel
Department of Mathematical Sciences
Kent State University
Kent, OH
[email protected]

Talk 4. On worst-case GMRES
Let a nonsingular matrix A be given. By maximizing the GMRES residual norm at step k over all right-hand sides from the unit sphere, we get an approximation problem called the worst-case GMRES problem. In this contribution we concentrate on the characterization of this problem. In particular, we will show that worst-case starting vectors satisfy the so-called cross-equality and that they are always right singular vectors of the matrix pk(A), where pk is the corresponding worst-case GMRES polynomial. While the ideal GMRES polynomial is always unique, we will show that a worst-case GMRES polynomial need not be unique.


2012 SIAM Conference on Applied Linear Algebra 74

Petr Tichy
Institute of Computer Science
Czech Academy of Sciences
[email protected]

Vance [email protected]

Jorg Liesen
Institute of Mathematics
Technical University of Berlin
[email protected]
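The quantity studied in the talk above is easy to probe numerically: for a fixed unit right-hand side b, the step-k GMRES residual is a linear least-squares problem over the Krylov space, and sampling many unit b gives a crude lower bound on the worst case. A NumPy sketch (small random matrix, sampling instead of true maximization; all parameters are illustrative assumptions):

```python
import numpy as np

def gmres_residual(A, b, k):
    """GMRES residual norm at step k: min ||b - A x|| over x in K_k(A, b)."""
    K = np.column_stack([np.linalg.matrix_power(A, j) @ b for j in range(k)])
    W = A @ K                                  # image of the Krylov basis
    y, *_ = np.linalg.lstsq(W, b, rcond=None)  # least-squares coefficients
    return np.linalg.norm(b - W @ y)

rng = np.random.default_rng(1)
A = np.diag([1.0, 2.0, 3.0, 4.0]) + 0.1 * rng.standard_normal((4, 4))
# sample-based lower bound on worst-case GMRES at step k = 2
worst = max(gmres_residual(A, b / np.linalg.norm(b), 2)
            for b in rng.standard_normal((2000, 4)))
assert 0.0 <= worst <= 1.0 + 1e-8   # x = 0 is feasible, so residual <= ||b|| = 1
```

The monomial Krylov basis is fine for this tiny k; for larger k an orthonormal (Arnoldi) basis would be needed for stability.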

CP 5. Control systems I

Talk 1. Structured perturbation of a controllable pair
We study the variation of the controllability indices of a pair (A,B) = (A, [B1 b]) ∈ C^(n×n) × C^(n×(m1+1)), where (A,B1) is controllable, when we make small additive perturbations on the last column of B. Namely, we look for necessary conditions that must be satisfied by the controllability indices of (A, [B1 b']), where b' is sufficiently close to b. On the other hand, if ε is a sufficiently small real number, we look for (necessary and sufficient) conditions that must be satisfied by a partition in order to be the partition of the controllability indices (or, equivalently, those of Brunovsky) of (A, [B1 b']) for some b' such that ‖b' − b‖ < ε. These problems can be considered as perturbation problems as well as completion problems, since one part of the matrix remains fixed. Because of this, we talk about structured perturbation.

Inmaculada de Hoyos
Dept. de Matematica Aplicada y EIO, Facultad de Farmacia, Universidad del País Vasco
[email protected]

Itziar Baragana
Dept. de Ciencias de la Computacion e IA, Facultad de Informatica, Universidad del País Vasco
[email protected]

M. Asuncion Beitia
Dept. de Didactica de la Matematica y de las CCEE, Escuela de Magisterio, Universidad del País Vasco
[email protected]

Talk 2. Reduction to miniversal deformations of families of bilinear systems
Bilinear systems under similarity equivalence are considered. Using Arnold's technique, a versal deformation of a differentiable family of bilinear systems is derived from the tangent space and orthogonal bases for a normal space to the orbits of similarity-equivalent bilinear systems. Versal deformations provide a special parametrization of the space of bilinear systems, which can be applied to perturbation analysis and to the investigation of complicated objects like singularities and bifurcations in multi-parameter bilinear systems.

M. Isabel García-Planas
Dept. de Matematica Aplicada I
Universitat Politecnica de Catalunya
[email protected]

Talk 3. Matrix stratifications in control applications
In this talk, we illustrate how the software tool StratiGraph can be used to compute and visualize the closure hierarchy graphs associated with different orbit and bundle stratifications. The stratification provides the user with qualitative information about a dynamical system, such as how the dynamics of the control problem and its system characteristics behave under perturbations. We investigate linearized models of mechanical systems which can be represented by a linear time-invariant (LTI) system. We also analyze dynamical systems described by linear time-invariant differential-algebraic sets of equations (DAEs), which often are expressed as descriptor models.

Stefan Johansson
Department of Computing Science, Umea University, Sweden
[email protected]

Andrii Dmytryshyn
Department of Computing Science, Umea University, Sweden
[email protected]

Pedher Johansson
Department of Computing Science, Umea University, Sweden
[email protected]

Bo Kagstrom
Department of Computing Science and HPC2N, Umea University, Sweden
[email protected]

Talk 4. Stratification of structured pencils and related topics
In this talk we present new results on stratifications (i.e., constructing closure hierarchies) of structured pencil orbits under congruence transformations: O(A,B) = {S^T (A,B) S : det S ≠ 0}, where A and B are symmetric or skew-symmetric matrices. We use the canonical forms given by Thompson [Linear Algebra Appl. 147 (1991), 323–371] as the representatives of the orbits. In particular, miniversal deformations are derived and codimensions of the orbits are computed by solving associated systems of matrix equations (codimensions can also be calculated from the miniversal deformations). One goal is to reduce the stratifications of structured pencils under congruence transformations to the well-studied and solved problems for stratifications of matrix pencils.

Andrii Dmytryshyn
Dept. of Computing Science
Umea University, Sweden
[email protected]

Stefan Johansson
Dept. of Computing Science
Umea University, Sweden
[email protected]

Bo Kagstrom
Dept. of Computing Science and HPC2N
Umea University, Sweden
[email protected]

CP 6. Preconditioning I

Talk 1. Memory optimization to build a Schur complement
One promising algorithm for solving linear systems is the hybrid method based on domain decomposition and the Schur complement (used by HIPS and MAPHYS, for instance). In this method, a direct solver is used as a subroutine on each subdomain matrix; unfortunately, these calls are subject to serious memory overhead. With our improvements, the direct solver PASTIX can easily scale in terms of performance with several nodes composed of multicore chips and forthcoming GPU accelerators, and the memory peak due to the Schur complement computation can be reduced by 10% to 30%.

Astrid Casadei
INRIA Bordeaux, 351 cours de la Liberation, 33405 Talence
[email protected]

Pierre Ramet
University Bordeaux, 351 cours de la Liberation, 33405 Talence
[email protected]

Talk 2. On generalized inverses in solving two-by-two block linear systems


The goal is to analyze the role of generalized inverses in the projected Schur complement based algorithm for solving nonsymmetric two-by-two block linear systems. The outer level of the algorithm combines the Schur complement reduction with the null-space method in order to treat the singularity of the (1,1)-block. The inner level uses a projected variant of a Krylov subspace method. We prove that the inner level is invariant to the choice of a generalized inverse to the (1,1)-block, so that each generalized inverse is internally adapted to the Moore-Penrose one. The algorithm extends ideas used in the background of the FETI domain decomposition methods. Numerical experiments confirm the theoretical results.

Radek Kucera
Centre of Excellence IT4I
VSB-TU Ostrava
[email protected]

Tomas Kozubek
Centre of Excellence IT4I
VSB-TU Ostrava
[email protected]

Alexandros Markopoulos
Centre of Excellence IT4I
VSB-TU Ostrava
[email protected]

Talk 3. Sparse direct solver on top of large-scale multicore systems with GPU accelerators
In numerical simulations, solving large sparse linear systems is a crucial and time-consuming step. Many parallel factorization techniques have been studied. In the PASTIX solver, we developed a dynamic scheduler for strongly hierarchical modern architectures. Recently, we evaluated how to replace this scheduler by generic frameworks (DAGUE or STARPU) to execute the factorization task graph. Since sparse direct solvers are built on dense linear algebra kernels, we are implementing prototype versions of solvers on top of the PLASMA and MAGMA libraries. We aim at designing algorithms and parallel programming models to implement direct methods on GPU-equipped computers.

Xavier Lacoste
INRIA Bordeaux, 351 cours de la Liberation, 33405 Talence
[email protected]

Pierre Ramet
University Bordeaux, 351 cours de la Liberation, 33405 Talence
[email protected]

Talk 4. New block distributed Schur complement preconditioners for CFD simulation on many-core architectures
At the German Aerospace Center, the parallel simulation systems TAU and TRACE have been developed for the aerodynamic design of aircraft and of turbines for jet engines. For the parallel iterative solution of the large, sparse real or complex systems of linear equations required by both CFD solvers, block-local preconditioners are compared with reformulated global block Distributed Schur Complement (DSC) preconditioning methods. Numerical, performance, and scalability results of block DSC preconditioned FGMRes algorithms are presented for typical TAU and TRACE problems on many-core systems, together with an analysis of the advantages of using block compressed sparse matrix data formats.

Achim Basermann
Simulation and Software Technology
German Aerospace Center (DLR)
[email protected]

Melven Zollner
Simulation and Software Technology
German Aerospace Center (DLR)
[email protected]

CP 7. Least squares

Talk 1. Partially linear modeling combining least squares support vector machines and sparse linear regression
In this talk we propose an algorithm to efficiently solve a partially linear optimization problem in which the linear relation will be sparse and the non-linear relation will be non-sparse. The sparsity in the linear relation is obtained by penalizing the complexity of the corresponding weight vector, similarly as in LASSO or group LASSO. The non-linear relation is kernel-based and its complexity is penalized similarly as in Least Squares Support Vector Machines (LS-SVM). To solve the optimization problem we eliminate the non-linear relation using a Singular Value Decomposition. The remaining optimization problem can be solved using existing solvers.

Dries Geebelen
Dept. of Electrical Engineering (ESAT), SCD
KU Leuven, 3001 Leuven, Belgium
[email protected]

Johan Suykens
Dept. of Electrical Engineering (ESAT), SCD
KU Leuven, 3001 Leuven, Belgium
[email protected]

Joos Vandewalle
Dept. of Electrical Engineering (ESAT), SCD
KU Leuven, 3001 Leuven, Belgium
[email protected]

Talk 2. Construction of test instances with prescribed properties for sparsity problems
For benchmarking algorithms solving

min ‖x‖1 subject to Ax = b,

it is desirable to create test instances containing a matrix A, a right-hand side b, and a known solution x. To guarantee the existence of a solution with prescribed sign pattern, the existence of a dual certificate w satisfying A^T w ∈ ∂‖x‖1 is necessary and sufficient. As used in the software package L1TestPack, alternating projections calculate a dual certificate with least squares on the complement of the support of x. In this talk, we present strategies to construct test instances with different additional properties, such as a maximal support size or a favorable dual certificate.

Christian Kruschel
Institute for Analysis and Algebra
TU Braunschweig
[email protected]

Dirk Lorenz
Institute for Analysis and Algebra
TU Braunschweig
[email protected]
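The optimality condition used in the talk above is easy to illustrate. The NumPy sketch below does not reproduce L1TestPack's routines: it uses an orthogonal A (an assumption that makes the certificate exact; general A needs the alternating projections the abstract describes) and checks that A^T w lies in the subdifferential of ‖·‖1 at the planted x, i.e. it equals sign(x) on the support and has magnitude at most 1 elsewhere.

```python
import numpy as np

rng = np.random.default_rng(6)
n = 8
# orthogonal A keeps this sketch exact; L1TestPack handles general A
A, _ = np.linalg.qr(rng.standard_normal((n, n)))

x = np.zeros(n)
S = np.array([1, 4, 6])             # prescribed support
x[S] = [2.0, -1.0, 3.0]             # prescribed sign pattern / values
b = A @ x                           # right-hand side of Ax = b

# candidate dual certificate: least-squares solution of A_S^T w = sign(x_S)
w = A[:, S] @ np.sign(x[S])
g = A.T @ w
off = np.setdiff1d(np.arange(n), S)
assert np.allclose(g[S], np.sign(x[S]))   # equality on the support
assert np.all(np.abs(g[off]) <= 1.0)      # subgradient bound off the support
```

With orthonormal columns the off-support entries of A^T w are exactly zero, so the certificate condition holds trivially; the interesting work, per the abstract, is constructing such instances for arbitrary A and with prescribed extra properties.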

Talk 3. Weighted total least-squares collocation with geodetic applications
The Total Least-Squares (TLS) approach to Errors-In-Variables models is well established, even in the case of correlation among the observations, and among the elements of the coefficient matrix, as long as the two are not cross-correlated. Adding stochastic prior information transforms the fixed parameter vector into a random effects vector to be predicted rather than


estimated. Recently, Schaffrin found a TLS solution for this case, if the data were assumed i.i.d. Here, this assumption is dropped, leading to a technique that is perhaps best called Weighted Total Least-Squares Collocation. A fairly general algorithm will be presented along with an application.

Kyle Snow
Topcon Positioning Systems, Inc.
[email protected]

Burkhard Schaffrin
School of Earth Sciences
The Ohio State University
[email protected]

Talk 4. Polynomial regression in the Bernstein basis
One important problem in statistics consists of determining the relationship between a response variable and a single predictor variable through a regression function. In this talk we consider the problem of linear regression when the regression function is an algebraic polynomial of degree less than or equal to n. The coefficient matrix A of the overdetermined linear system to be solved in the least squares sense is usually an ill-conditioned matrix, which leads to a loss of accuracy in the solution of the corresponding normal equations. If the monomial basis is used for the space of polynomials, A is a rectangular Vandermonde matrix, while if the Bernstein basis is used then A is a rectangular Bernstein-Vandermonde matrix. Under certain conditions both classes of matrices are totally positive, and this can advantageously be used in the construction of algorithms based on the QR decomposition of A. In the talk, the advantage of using the Bernstein basis will be shown.

Jose-Javier Martínez
Departamento de Matematicas
Universidad de Alcala
[email protected]

Ana Marco
Departamento de Matematicas
Universidad de Alcala
[email protected]
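The conditioning gap between the two bases is easy to observe numerically. A NumPy sketch (equispaced nodes in [0,1] and degree 10 are illustrative choices, not taken from the talk) comparing the rectangular Vandermonde matrix with its Bernstein counterpart:

```python
import math
import numpy as np

def vandermonde(x, n):
    """Columns 1, t, ..., t^n of the monomial basis evaluated at nodes x."""
    return np.column_stack([x**j for j in range(n + 1)])

def bernstein_vandermonde(x, n):
    """Columns are the Bernstein basis polynomials B_{j,n} on [0, 1]."""
    return np.column_stack([math.comb(n, j) * x**j * (1 - x)**(n - j)
                            for j in range(n + 1)])

x = np.linspace(0.0, 1.0, 50)    # 50 equispaced sample points
n = 10                           # polynomial degree
cond_mono = np.linalg.cond(vandermonde(x, n))
cond_bern = np.linalg.cond(bernstein_vandermonde(x, n))
# the Bernstein basis is far better conditioned on [0, 1]
assert cond_bern < cond_mono
```

This only shows the conditioning effect; the total-positivity-based QR algorithms of the talk go further and exploit the structure of the Bernstein-Vandermonde matrix itself.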

CP 8. Miscellaneous I

Talk 1. Reduced basis modeling for parametrized systems of Maxwell's equations
The Reduced Basis Method generates low-order models of parametrized PDEs to allow for efficient evaluation of parametrized models in many-query and real-time contexts. We apply the Reduced Basis Method to systems of Maxwell's equations arising from electrical circuits. Using microstrip models, the input-output behaviour of interconnect structures is approximated with low-order reduced basis models for a parametrized geometry (like the distance between microstrips) and/or material coefficients (like the permittivity and permeability of substrates). We show the theoretical framework in which the Reduced Basis Method is applied to Maxwell's equations and present first numerical results.

Martin Hess
Computational Methods in Systems and Control
MPI Magdeburg
[email protected]

Peter Benner
Computational Methods in Systems and Control
MPI Magdeburg
[email protected]

Talk 2. A new alternative to the tensor product in wavelet construction
The tensor product has been a predominant method for constructing multivariate biorthogonal wavelet systems. An important feature of the tensor product is that it transforms univariate refinement filters into multivariate ones so that the biorthogonality of the univariate refinement filters is preserved. In this talk, we introduce an alternative transformation to the tensor product, which we refer to as the Coset Sum. In addition to preserving the biorthogonality of univariate refinement filters, the Coset Sum shares many other essential features of the tensor product that make it attractive in practice. Furthermore, the Coset Sum can even provide wavelet systems whose algorithms are faster than the ones based on the tensor product.

Youngmi Hur
Dept. of Applied Mathematics and Statistics
The Johns Hopkins University
[email protected]

Fang Zheng
Dept. of Applied Mathematics and Statistics
The Johns Hopkins University
[email protected]

Talk 3. Purely algebraic domain decomposition methods for the incompressible Navier-Stokes equations
In the context of domain decomposition methods, an algebraic approximation of the transmission condition (TC) is proposed in "F. X. Roux, F. Magoules, L. Series, Y. Boubendir, Algebraic approximations of Dirichlet-to-Neumann maps for the equations of linear elasticity, Comput. Methods Appl. Mech. Engrg., 195, 3742-3759, 2006". For the case of non-overlapping domains, approximations of the TCs are analogous to the approximation of the Schur complements (SC) in the incomplete block factorization. The basic idea is to approximate the SC by small SC approximations in patches. These local transmission conditions are constructed independently, thus enhancing the parallelism in the overall approximation. In this work, a new computation of the local Schur complement is proposed and the method is tested on steady-state incompressible Navier-Stokes problems discretized using the finite element method. The earlier attempts in the literature approximate the TC by building small patches around each node. The method is generalized by aggregating the nodes and thus reducing the overlapping computation of local TCs. Moreover, the approach of aggregating the nodes is based on the "numbering" of the nodes rather than on the "edge connectivity" between the nodes, as previously done in the reference above. With the new aggregation scheme, the construction time is significantly lower. Furthermore, the new aggregation-based approximation leads to a completely parallel solve phase. The new method is tested on the difficult cavity problem with high Reynolds number on uniform and stretched grids. The parallelism of the new method is also discussed.

Pawan Kumar
Dept. of Computer Science
K.U. Leuven
[email protected]

Talk 4. On specific stability bounds for linear multiresolution schemes based on biorthogonal wavelets
Some biorthogonal bases of compactly supported wavelets can be considered as a cell-average prediction scheme within Harten's framework. In this paper we express the biorthogonal


prediction operator as a combination of some finite differences. Through a detailed analysis of certain contractivity properties, we arrive at specific stability bounds for the multiresolution transform. A variety of tests indicate that these bounds are close to numerical estimates.

J.C. Trillo
Dept. of Applied Mathematics and Statistics
U.P. Cartagena
[email protected]

Sergio Amat
Dept. of Applied Mathematics and Statistics
U.P. Cartagena
[email protected]

CP 9. Eigenvalue problems I

Talk 1. Incremental methods for computing extreme singular subspaces
The oft-described family of low-rank incremental SVD methods approximates the truncated singular value decomposition of a matrix A via a single, efficient pass through the matrix. These methods can be adapted to compute the singular triplets associated with either the largest or smallest singular values. Recent work identifies a relationship with an optimizing eigensolver over A^T A and presents multi-pass iterations which are provably convergent to the targeted singular triplets. We will discuss these results and provide additional analysis, including circumstances under which the singular triplets are exactly computed via a single pass through the matrix.

Christopher G. Baker
Computational Engineering and Energy Sciences Group
Oak Ridge National Laboratory, USA
[email protected]

Kyle Gallivan
Dept. of Mathematics
Florida State University, USA
[email protected]

Paul Van Dooren
Dept. of Mathematical Engineering
Catholic University of Louvain, Belgium
[email protected]

Talk 2. An efficient implementation of the shifted subspace iteration method for sparse generalized eigenproblems
We revisit the subspace iteration (SI) method for symmetric generalized eigenvalue problems in the context of improving an existing solver in a commercial structural analysis package. A new subspace algorithm is developed that increases the efficiency of the naive SI by means of novel shifting techniques and locking. Reliability of results is ensured using a stronger convergence criterion, and various control parameters are exposed to the end user. The algorithm is implemented in software using the C++ library 'Eigen'. Results are presented, and we end with an introduction to a new, communication-reducing method for sparse matrix-vector multiplication that we envisage will increase efficiency further.

Ramaseshan Kannan
School of Mathematics, University of Manchester, UK
Arup/Oasys Limited, UK
[email protected]

Francoise Tisseur
School of Mathematics
University of Manchester, UK
[email protected]

Nick Higham
School of Mathematics
University of Manchester, UK
[email protected]

Talk 3. Recursive approximation of the dominant eigenspace of an indefinite matrix
We consider here the problem of tracking the dominant eigenspace of an indefinite matrix by updating and downsizing recursively a low-rank approximation of the given matrix. The tracking uses a window of the given matrix, which is adapted at every step of the algorithm. This increases the rank of the approximation, and hence requires a rank reduction of the approximation. In order to perform the window adaptation and the rank reduction in an efficient manner, we make use of a new anti-triangular decomposition for indefinite matrices. All steps of the algorithm only make use of orthogonal transformations, which guarantees the stability of the intermediate steps.

Nicola Mastronardi
Istituto per le Applicazioni del Calcolo "M. Picone"
CNR, Sede di Bari, Italy
[email protected]

Paul Van Dooren
Dept. of Mathematical Engineering
ICTEAM, Universite catholique de Louvain, Belgium
[email protected]

Talk 4. Jacobi-Davidson type methods using a shift invariance property of Krylov subspaces for eigenvalue problems
The Jacobi-Davidson method is a subspace method for computing a few eigenpairs of a large sparse matrix. In the method, one has to solve a nonlinear equation, the so-called correction equation, to generate subspaces. The correction equation is approximately solved after a linearization that corresponds to replacing a desired eigenvalue with its approximation. In this talk, we focus on the nonlinear correction equation. Here, a Krylov subspace is generated, not only to compute approximate eigenvalues which lead to a class of linearized correction equations, but also to solve the linearized correction equations. Our approach reproduces the Jacobi-Davidson method and the Riccati method, and derives new efficient variants.

Takafumi Miyata
Graduate School of Engineering
Nagoya University
[email protected]

Tomohiro Sogabe
Graduate School of Information Science & Technology
Aichi Prefectural University
[email protected]

Shao-Liang Zhang
Graduate School of Engineering
Nagoya University
[email protected]

CP 10. Miscellaneous II

Talk 1. Phylogenetic trees via latent semantic indexing
In this talk we discuss a technique for constructing phylogenetic trees from a set of whole-genome sequences. The method does not use local sequence alignments but is instead based on latent semantic indexing, which involves a reduction of dimension via the singular value decomposition of a very large polypeptide-by-genome frequency matrix. Distance measures between species are then obtained. These are used to generate a phylogenetic tree relating the species under consideration.

Jeffery J. Leader
Dept. of Mathematics


Rose-Hulman Institute of Technology
[email protected]

Talk 2. Synchronization of rotations via Riemannian trust-regions
We estimate unknown rotation matrices R_i ∈ SO(n = 2, 3) from a set of measurements of the relative rotations R_i R_j^T. Each measurement is either slightly noisy, or an outlier bearing no information. We study the case where most measurements are outliers. In (A. Singer, Angular Synchronization by Eigenvectors and Semidefinite Programming, ACHA 30(1), pp. 20–36, 2011), an estimator is computed from a dominant subspace of a matrix. We observe this essentially results in least-squares estimation, and propose instead a Maximum Likelihood Estimator, explicitly acknowledging outliers. We compute the MLE via trust-region optimization on a matrix manifold. Comparison of our estimator with Riemannian Cramer-Rao bounds suggests efficiency.

Nicolas Boumal
Department of Mathematical Engineering, ICTEAM
Universite catholique de Louvain
[email protected]

Amit Singer
Department of Mathematics and PACM
Princeton University
[email protected]

Pierre-Antoine Absil
Department of Mathematical Engineering, ICTEAM
Universite catholique de Louvain
[email protected]
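Singer's eigenvector estimator, which the talk above takes as its least-squares baseline, is easy to sketch for SO(2), where each rotation is an angle: build the Hermitian matrix H with H_ij = e^{i(θ_i − θ_j)} from the pairwise measurements, and its dominant eigenvector recovers the angles up to one global rotation. A NumPy toy with noiseless measurements (an assumption; the interesting regime in the talk is mostly-outlier data):

```python
import numpy as np

rng = np.random.default_rng(4)
n = 30
theta = rng.uniform(0, 2 * np.pi, n)            # ground-truth rotation angles

# noiseless pairwise measurements H_ij = exp(i (theta_i - theta_j)); Hermitian
H = np.exp(1j * (theta[:, None] - theta[None, :]))

w, V = np.linalg.eigh(H)                        # eigenvalues in ascending order
v = V[:, -1]                                    # dominant eigenvector

# v equals exp(i theta)/sqrt(n) up to one global phase; fix it via entry 0
assert np.isclose(w[-1], n)                     # H = z z^*, so top eigenvalue is n
assert np.allclose(v * np.conj(v[0]) / np.abs(v[0]),
                   np.exp(1j * (theta - theta[0])) / np.sqrt(n))
```

With noise or outliers H is no longer rank one and the eigenvector only solves a least-squares surrogate, which is exactly the gap the maximum-likelihood, trust-region approach of the talk addresses.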

CP 11. Miscellaneous III

Talk 1. A new multi-way array decomposition
This talk draws a new perspective on multilinear algebra and the multi-way array decomposition. A novel decomposition technique based on Enhanced Multivariance Product Representation (EMPR) is introduced. EMPR is a derivative of High Dimensional Model Representation (HDMR), which is a divide-and-conquer algorithm. The proposed technique provides a decomposition by rewriting multi-way arrays in a form that consists of outer products of certain support vectors. Each support vector corresponds to a different subspace of the original multi-way array. Such an approach can improve the semantic meaning of the decomposition by eliminating rank considerations. Compression of animations is performed as an initial experimental evaluation.

Evrim Korkmaz Ozay
Informatics Institute
Istanbul Technical University, Turkey
[email protected]

Metin Demiralp
Informatics Institute
Istanbul Technical University, Turkey
[email protected]

Talk 2. Towards more reliable performances of accurate floating-point summation algorithms
Several accurate algorithms to sum IEEE-754 floating-point numbers have been published recently, e.g. by Rump-Ogita-Oishi (2008, 2009) and Zhu-Hayes (2009, 2010). Since some of these actually compute the correct rounding of the exact sum, run-time and memory performance become the discriminant property when deciding which one to choose. In this talk we focus on the difficult problem of presenting reliable measures of the run-time performance of such core algorithms. We present an almost machine-independent analysis based on the instruction-level parallelism of the algorithm. Our PerPI software tool automatizes this analysis and provides a more reliable performance analysis.

Philippe Langlois
DALI - LIRMM
University of Perpignan Via Domitia
[email protected]
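The accurate-summation algorithms compared above are built from compensated additions that capture each rounding error exactly. A minimal Python illustration of the idea, using the Kahan-Babuska (Neumaier) variant rather than any of the cited algorithms verbatim:

```python
def neumaier_sum(xs):
    """Kahan-Babuska (Neumaier) compensated summation: track the exact
    rounding error of each addition in a separate accumulator."""
    s = 0.0
    c = 0.0                          # accumulated correction term
    for x in xs:
        t = s + x
        if abs(s) >= abs(x):
            c += (s - t) + x         # low-order part of x was lost in s + x
        else:
            c += (x - t) + s         # low-order part of s was lost in s + x
        s = t
    return s + c

data = [1e16, 1.0, -1e16]
assert sum(data) == 0.0              # naive summation drops the 1.0 entirely
assert neumaier_sum(data) == 1.0     # compensation recovers the exact sum
```

Such error-free transformations are cheap in flops but branchy and dependence-heavy, which is precisely why the instruction-level-parallelism analysis of the talk is needed to predict their real run-time behaviour.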

CP 12. Matrix norms

Talk 1. Numerical solutions of singular linear matrix differential equations
The main objective of this talk is to provide a procedure for discretizing an initial value problem for a class of linear matrix differential equations whose coefficients are square constant matrices and whose highest-order derivative has a degenerate matrix coefficient. By using matrix pencil theory, we first give necessary and sufficient conditions to obtain a unique solution for the continuous-time model. Then, by assuming that the input vector changes only at equally spaced sampling instants, we derive the corresponding discrete-time state equation, which yields the values of the solutions of the continuous-time model.

Ioannis K. Dassios
Department of Mathematics
University of Athens, Greece
[email protected]

Talk 2. Matrix version of Bohr's inequality
The classical Bohr's inequality states that for any z, w ∈ C and for p, q > 1 with 1/p + 1/q = 1,

|z + w|² ≤ p|z|² + q|w|²,

with equality if and only if w = (p − 1)z. Several operator generalizations of the Bohr inequality have been obtained by some authors. Vasic and Keckic (Math. Balkanica, 1 (1971), 282-286) gave an interesting generalization of the following form:

|∑_{j=1}^m z_j|^r ≤ (∑_{j=1}^m p_j^{1/(1−r)})^{r−1} ∑_{j=1}^m p_j |z_j|^r,

where z_j ∈ C, p_j > 0, r > 1. In this talk we aim to give weak majorization inequalities for matrices and apply them to prove an eigenvalue extension of the result by Vasic and Keckic and unitarily invariant norm extensions.

Jagjit Singh
Department of Mathematics
Bebe Nanaki University College, Mithra, Kapurthala, Punjab, India
[email protected]
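The scalar inequality is easy to probe numerically. A quick Python check of |z + w|² ≤ p|z|² + q|w|² over random complex pairs and conjugate exponents, plus the equality case w = (p − 1)z (illustration only; the matrix extensions of the talk are not reproduced):

```python
import random

random.seed(5)

def rand_complex():
    return complex(random.uniform(-5, 5), random.uniform(-5, 5))

# random check of |z + w|^2 <= p|z|^2 + q|w|^2 for conjugate exponents p, q
for _ in range(1000):
    z, w = rand_complex(), rand_complex()
    p = random.uniform(1.01, 10.0)
    q = p / (p - 1.0)                 # so that 1/p + 1/q = 1
    assert abs(z + w)**2 <= p * abs(z)**2 + q * abs(w)**2 + 1e-9

# equality case: w = (p - 1) z attains the bound exactly
z, p = 1 + 2j, 3.0
w = (p - 1) * z
q = p / (p - 1)
assert abs(abs(z + w)**2 - (p * abs(z)**2 + q * abs(w)**2)) < 1e-9
```

For z = 1 + 2i and p = 3 both sides equal 45, confirming the stated equality condition.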

CP 13. Coding theory

Talk 1. Linear codes in LRTJ spaces
In [S. Jain, Array Codes in the Generalized-Lee-RT-Pseudo-Metric (the GLRTP-Metric), to appear in Algebra Colloquium], Jain introduced a new metric, viz. the LRTJ-metric, on the space Mat_{m×s}(Z_q), the module space of all m×s matrices with entries from the finite ring Z_q (q ≥ 2), generalizing the classical one-dimensional Lee metric [C. Y. Lee, Some properties of non-binary error correcting codes, IEEE Trans. Information Theory, IT-4 (1958), 77-82] and the two-dimensional RT-metric [M. Yu. Rosenbloom and M. A.


Tsfasman, Codes for the m-metric, Problems of Information Transmission, 33 (1997), 45-52], which further appeared in [E. Deza and M. M. Deza, Encyclopedia of Distances, Elsevier, 2008, p. 270]. In this talk, we discuss error control techniques, viz. error detection and error correction, in linear codes equipped with the LRTJ-metric in communication channels [S. Jain, Array Codes in the Generalized-Lee-RT-Pseudo-Metric (the GLRTP-Metric), to appear in Algebra Colloquium]. We further discuss various properties of the dual code of an LRTJ code and obtain the relation for the complete weight enumerator of the dual code of an array code in LRTJ spaces in the form of MacWilliams duality relations [S. Jain, MacWilliams Duality in LRTJ-Spaces, to appear in Ars Combinatoria].

Sapna Jain
Dept. of Mathematics
University of Delhi, India
[email protected]

Talk 2. On turbo codes of rate 1/n from a linear systems point of view
Turbo codes were introduced by Berrou, Glavieux and Thitimajshima in 1993. Their idea of using parallel concatenation of recursive systematic convolutional codes with an interleaver was a major step in terms of achieving low bit error rates at signal-to-noise ratios near the Shannon limit. One of the most important parameters of turbo codes is the effective free distance, introduced by Berrou and Montorsi. It plays a role similar to that of the free distance for convolutional codes. Campillo, Devesa, Herranz and Perea developed turbo codes in the framework of the input-state-output representation for convolutional codes. In this talk, using this representation, we present conditions for a turbo code with rate 1/n to achieve maximum effective free distance.

Victoria Herranz
Institute Center of Operations Research
Dept. of Statistics, Mathematics and Computer Science
University Miguel Hernandez de Elche
[email protected]

Antonio Devesa
Dept. of Statistics, Mathematics and Computer Science
University Miguel Hernandez de Elche
[email protected]

Carmen Perea
Institute Center of Operations Research
Dept. of Statistics, Mathematics and Computer Science
University Miguel Hernandez de Elche
[email protected]

Veronica Requena
Dept. of Statistics, Mathematics and Computer Science
University Miguel Hernandez de Elche
[email protected]

CP 14. Iterative methods II

Talk 1. Meshless method for the steady Burgers' equation: a matrix equation approach
In this talk we present some numerical linear algebra methods to solve Burgers' equations. A meshless method based on thin plate splines is applied to a non-homogeneous steady Burgers' equation with Dirichlet boundary conditions. The numerical approximation of the solution leads to a large-scale nonlinear matrix equation. In order to implement the inexact Newton algorithm to solve this equation, we focus on the Jacobian matrix related to this method and establish some interesting matrix relations. The obtained linear matrix equation

will be solved using a global GMRES method. Numerical examples will be given to illustrate our method.

Mustapha Hached
L.M.P.A., Univ. Lille Nord de France, ULCO, 50 rue F. Buisson, BP 699, F-62228 Calais Cedex, France
[email protected]

Abderrahman Bouhamidi
L.M.P.A., Univ. Lille Nord de France, ULCO, 50 rue F. Buisson, BP 699, F-62228 Calais Cedex, France
[email protected]

Khalide Jbilou
L.M.P.A., Univ. Lille Nord de France, ULCO, 50 rue F. Buisson, BP 699, F-62228 Calais Cedex, France
[email protected]

Talk 2. Tuned preconditioners for inexact two-sided inverse and Rayleigh quotient iteration
We consider two-sided inner-outer iterative methods based on inverse and Rayleigh quotient iteration for the numerical solution of non-normal eigenvalue problems. There, in each outer iteration two adjoint linear systems have to be solved, but it is often sufficient to solve these systems inexactly and still preserve the fast convergence of the original exact algorithm. This can, e.g., be accomplished by applying a limited number of steps of a Krylov subspace method for linear systems. To reduce the number of these inner iterations, preconditioners are usually applied. In the one-sided case it is even possible to keep the number of inner iterations almost constant during the course of the inner-outer method by applying so-called tuned preconditioners. We investigate how these ideas can be carried over to the two-sided case. A special interest there is the simultaneous solution of the occurring adjoint linear systems using methods based on the two-sided Lanczos process, e.g., BiCG and QMR.

Patrick Kurschner
Computational Methods in Systems and Control Theory
Max Planck Institute Magdeburg
[email protected]

Melina Freitag
Department of Mathematical Sciences
University of [email protected]
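The building block of the talk above can be sketched in a few lines. The following is a minimal one-sided, exact-solve Rayleigh quotient iteration for a symmetric matrix; the talk's two-sided, inexact, tuned-preconditioned setting is not reproduced here, and the function name and test matrix are illustrative choices.

```python
import numpy as np

def rayleigh_quotient_iteration(A, x0, maxit=50, tol=1e-12):
    """One-sided Rayleigh quotient iteration with exact inner solves
    (A symmetric). Each step solves (A - rho*I) y = x and normalizes."""
    x = x0 / np.linalg.norm(x0)
    for _ in range(maxit):
        rho = x @ A @ x                      # Rayleigh quotient
        try:
            y = np.linalg.solve(A - rho * np.eye(len(x)), x)
        except np.linalg.LinAlgError:
            break                            # shift hit an exact eigenvalue
        x = y / np.linalg.norm(y)
        if np.linalg.norm(A @ x - (x @ A @ x) * x) < tol:
            break
    rho = x @ A @ x
    return rho, x

rng = np.random.default_rng(0)
B = rng.normal(size=(6, 6))
A = (B + B.T) / 2                            # symmetric test matrix
rho, x = rayleigh_quotient_iteration(A, rng.normal(size=6))
residual = np.linalg.norm(A @ x - rho * x)
```

In the inexact variants discussed in the talk, the inner `solve` would be replaced by a few preconditioned Krylov steps; tuning the preconditioner is what keeps the inner iteration count roughly constant.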

CP 15. Polynomial equations II

Talk 1. Standard triples of structured matrix polynomials
The notion of standard triples plays a central role in the theory of matrix polynomials. We study such triples for matrix polynomials P(λ) with structure S, where S is the Hermitian, symmetric, adj-even, adj-odd, adj-palindromic or adj-antipalindromic structure (with adj = ∗, T). We introduce the notion of an S-structured standard triple. With the exception of T-(anti)palindromic matrix polynomials of even degree with both −1 and 1 as eigenvalues, we show that P(λ) has structure S if and only if P(λ) admits an S-structured standard triple, and moreover that every standard triple of a matrix polynomial with structure S is S-structured. We investigate the important special case of S-structured Jordan triples.
Maha Al-Ammari
Dept. of Mathematics
King Saud [email protected]

Francoise Tisseur


2012 SIAM Conference on Applied Linear Algebra 80

Dept. of Mathematics
The University of [email protected]

Talk 2. Solving systems of polynomial equations using (numerical) linear algebra
Solving multivariate polynomial equations is central in optimization theory, systems and control theory, statistics, and several other sciences. The task is phrased as a (numerical) linear algebra problem involving a large sparse matrix containing the coefficients of the polynomials. Two approaches to retrieve all solutions are discussed: in the nullspace-based algorithm an eigenvalue problem is obtained by applying realization theory to the nullspace of the coefficient matrix; secondly, in the data-driven approach one operates directly on the coefficient matrix and solves linear equations using the QR decomposition, yielding an eigenvalue problem in the triangular factor R. Ideas are developed on the levels of geometry (e.g., column/row spaces, orthogonality), numerical linear algebra (e.g., Gram-Schmidt orthogonalization, ranks) and algorithms (e.g., sparse nullspace computations, power method).
Philippe Dreesen
Dept. Electrical Engineering, ESAT-SCD – IBBT Future Health Department
KU [email protected]

Kim Batselier
Dept. Electrical Engineering, ESAT-SCD – IBBT Future Health Department
KU [email protected]

Bart De Moor
Dept. Electrical Engineering, ESAT-SCD – IBBT Future Health Department
KU [email protected]
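The reduction "polynomial roots = eigenvalues" that underlies both approaches in the talk above is easiest to see in one variable: the roots of a monic polynomial are the eigenvalues of its companion matrix. A minimal univariate sketch (the multivariate, sparse-coefficient-matrix machinery of the talk is much more involved):

```python
import numpy as np

def companion_roots(coeffs):
    """Roots of p(x) = c0 + c1*x + ... + cn*x^n (cn != 0) computed as
    eigenvalues of the companion matrix of the monic normalization."""
    c = np.asarray(coeffs, dtype=float)
    c = c / c[-1]                      # make the polynomial monic
    n = len(c) - 1
    C = np.zeros((n, n))
    C[1:, :-1] = np.eye(n - 1)         # ones on the subdiagonal
    C[:, -1] = -c[:-1]                 # last column carries -c0, ..., -c_{n-1}
    return np.linalg.eigvals(C)

# p(x) = (x-1)(x-2)(x-3) = -6 + 11x - 6x^2 + x^3
roots = np.sort(companion_roots([-6.0, 11.0, -6.0, 1.0]).real)
```

The multivariate analogues replace the companion matrix by multiplication matrices extracted from the nullspace or the triangular factor R, but the eigenvalue extraction step is the same in spirit.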

CP 16. Matrices and algebraic structures

Talk 1. Determinantal range and Frobenius endomorphisms
For A, C ∈ M_n the set W_C(A) = {tr(AUCU∗) : UU∗ = I_n} is the C-numerical range of A, and it reduces to the classical numerical range when C is a rank-one Hermitian orthogonal projection. A variation of W_C(A) is the C-determinantal range of A, that is, Δ_C(A) = {det(A − UCU∗) : UU∗ = I_n}. We present some properties of this set and characterize the additive Frobenius endomorphisms for the determinantal range on the whole matrix algebra M_n and on the set of Hermitian matrices H_n.
Rute Lemos
CIDMA, Dept. of Mathematics
University of Aveiro, [email protected]

Alexander Guterman
Dept. of Mathematics and Mechanics
Moscow State University, [email protected]

Graca Soares
CMUC-UTAD, Dept. of Mathematics
University of Tras-os-Montes and Alto Douro, [email protected]

Talk 2. On algorithms for constructing (0, 1)-matrices with prescribed row and column sum vectors
Given partitions R and S with the same weight and S ⪯ R∗, let A(R, S) be the class of the (0, 1)-matrices with row sum vector R and column sum vector S. These matrices play an active role in modern mathematics, and the applications extend from their natural context (matrix theory, combinatorics or graph theory) to many other areas of knowledge, such as biology or chemistry. The Robinson-Schensted-Knuth correspondence establishes a bijection between the class A(R, S) and pairs of Young tableaux with conjugate shapes λ and λ∗, with S ⪯ λ ⪯ R∗. We give a canonical construction for matrices in A(R, S) whose insertion tableau has a prescribed shape λ, with S ⪯ λ ⪯ R∗. This algorithm generalizes some recent constructions due to R. Brualdi for the extremal cases λ = S and λ = R∗.
Henrique F. da Cruz
Departamento de Matematica
Universidade da Beira Interior, Covilha, [email protected]

Rosario Fernandes
Departamento de Matematica
Universidade Nova de Lisboa, Caparica, [email protected]
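For the class A(R, S) discussed above, a member matrix can be built by the standard greedy (Gale-Ryser type) construction; this is not the authors' RSK-based canonical construction, just a sketch of the simplest way to exhibit one matrix with the prescribed row and column sums.

```python
import numpy as np

def gale_ryser_matrix(R, S):
    """Greedy construction of a (0,1)-matrix with row sums R and column
    sums S; the class A(R,S) is nonempty iff S is dominated by the
    conjugate partition R*."""
    m, n = len(R), len(S)
    A = np.zeros((m, n), dtype=int)
    remaining = np.array(S, dtype=int)     # column sums still to be filled
    for i, r in enumerate(R):
        # place the r ones of row i in the columns with largest remaining demand
        cols = np.argsort(-remaining, kind='stable')[:r]
        A[i, cols] = 1
        remaining[cols] -= 1
    if remaining.any():
        raise ValueError("class A(R,S) is empty")
    return A

A = gale_ryser_matrix([3, 2, 2, 1], [3, 2, 2, 1])
```

The greedy rule always succeeds when the dominance condition holds; the talk's construction refines this by additionally controlling the shape of the RSK insertion tableau.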

Talk 3. Elementary matrices arising from unimodular rows
Let (R, m) be a commutative local ring with maximal ideal m. Let A = R[X] be an R-algebra and let f be a monic polynomial of R[X]. Assume that

1. A/fA is a finitely generated R-module.

2. R(X) contains a subalgebra B such that R(X) = R[X] + B and mB ⊂ J(B) (the Jacobson radical of B).

3. SLr(K(X)) = Er(K(X)) for every r ≥ 1, where K = R/m.

4. SLn(R[X]) ∩ En(R(X)) = En(R[X]).

Let F = (f1, f2, . . . , fn) be a unimodular row in R[X] which can be completed to a matrix C1 belonging to En(R(X)) and a matrix C2 belonging to En(K[X]). Then F can also be completed to a matrix belonging to En(R[X]).
Ratnesh Kumar Mishra
Dept. of Mathematics
Motilal Nehru National Institute of Technology, Allahabad-211004, [email protected]

Shiv Datt Kumar
Dept. of Mathematics
Motilal Nehru National Institute of Technology, Allahabad-211004, [email protected]

Raje Sridharan
School of Mathematics
Tata Institute of Fundamental Research, Homi Bhabha Road, Colaba, Mumbai-400 005, [email protected]

Talk 4. Nonsingular ACI-matrices over integral domains
An ACI-matrix is a matrix whose entries are polynomials of degree at most one in a number of indeterminates, where no indeterminate appears in two different columns. Consider the next two problems: (a) characterize the ACI-matrices of order n all of whose completions have the same nonzero constant determinant; (b) characterize the ACI-matrices of order n all of whose completions are nonsingular. In 2010 Brualdi, Huang and Zhan solved both problems for fields of at least n + 1 elements. We extend their result on problem (a) to integral domains, and extend their result on problem (b) to arbitrary fields.
Alberto Borobia
Dept. of Mathematics
Universidad Nacional de Educacion a Distancia, [email protected]

Roberto Canogar
Dept. of Mathematics
Universidad Nacional de Educacion a Distancia, [email protected]

CP 17. Lyapunov equations

Talk 1. Lyapunov matrix inequalities with solutions sharing a common Schur complement
The square matrices A1, A2, · · · , AN are said to be stable with respect to the positive definite matrices P1, · · · , PN if the Lyapunov inequalities A_i^T P_i + P_i A_i < 0, i = 1, · · · , N, are satisfied. In this case, the matrices Pi are called Lyapunov solutions for the matrices Ai, i = 1, · · · , N, respectively. In this work, we investigate when the Lyapunov solutions Pi share the same Schur complement of a certain order. The existence problem of a set of Lyapunov solutions sharing a common Schur complement arises, for instance, in the stabilization of a switched system under an arbitrary switching signal for which discontinuous jumps on some of the state components are allowed during the switching instants.
Ana Catarina Carapito
Dept. of Mathematics
University of Beira Interior, [email protected]

Isabel Bras
Dept. of Mathematics
University of Aveiro, [email protected]

Paula Rocha
Dept. of Electrical and Computer Engineering
University of Porto, [email protected]
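For a single stable A_i, a Lyapunov solution P_i as in the talk above can be obtained by solving the Lyapunov equation A^T P + P A = -Q for any positive definite Q. A minimal dense sketch via Kronecker vectorization (fine for small matrices only; the Schur-complement analysis of the talk is not reproduced):

```python
import numpy as np

def lyapunov_solution(A, Q):
    """Solve A^T P + P A = -Q by vectorization. Uses Fortran-order vec,
    for which vec(X Y Z) = (Z^T kron X) vec(Y)."""
    n = A.shape[0]
    I = np.eye(n)
    K = np.kron(I, A.T) + np.kron(A.T, I)        # vec of A^T P + P A
    p = np.linalg.solve(K, -Q.reshape(-1, order='F'))
    return p.reshape((n, n), order='F')

A = np.array([[-1.0, 2.0], [0.0, -3.0]])         # a stable matrix
P = lyapunov_solution(A, np.eye(2))
```

For stable A the solution P is symmetric positive definite, so it satisfies the strict Lyapunov inequality A^T P + P A < 0 of the talk.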

Talk 2. Solving large-scale projected periodic Lyapunov equations using structure-exploiting methods
Simulation and analysis of periodic systems can demand solving the projected periodic Lyapunov equations associated with those periodic systems, if the periodic systems are in descriptor form. In this talk, we discuss the iterative solution of large-scale sparse projected periodic discrete-time algebraic Lyapunov equations which arise in periodic state feedback problems and in model reduction of periodic descriptor systems. We extend the Smith method to solve the large-scale projected periodic discrete-time algebraic Lyapunov equations in lifted form. The block diagonal structure of the periodic solution is preserved in every iteration step. A low-rank version of this method is also presented, which can be used to compute low-rank approximations to the solutions of projected periodic discrete-time algebraic Lyapunov equations. Numerical results are given to illustrate the efficiency and accuracy of the proposed method.
Mohammad-Sahadet Hossain
Max Planck Institute for Dynamics of Complex Technical Systems, Magdeburg, [email protected]

Peter Benner
Chemnitz University of Technology, Chemnitz, Germany, and
Max Planck Institute for Dynamics of Complex Technical Systems, Magdeburg, [email protected]
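The Smith method that the talk above extends can be sketched in its plain, unprojected and non-periodic form: for the discrete-time Lyapunov equation X = A X A^T + W with spectral radius of A below one, the fixed-point iteration converges geometrically. A minimal numpy sketch (the projected periodic lifted form and the low-rank variant are not reproduced here):

```python
import numpy as np

def smith_iteration(A, W, maxit=200, tol=1e-12):
    """Solve the discrete-time Lyapunov equation X = A X A^T + W by the
    (unaccelerated) Smith iteration; converges for rho(A) < 1."""
    X = W.copy()
    for _ in range(maxit):
        X_new = A @ X @ A.T + W
        if np.linalg.norm(X_new - X) < tol:
            return X_new
        X = X_new
    return X

A = np.array([[0.5, 0.2], [0.0, 0.4]])   # rho(A) = 0.5 < 1
W = np.eye(2)
X = smith_iteration(A, W)
residual = np.linalg.norm(X - A @ X @ A.T - W)
```

Each sweep only multiplies by A and A^T, which is what makes the low-rank version of the talk attractive for large sparse problems.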

Talk 3. A new minimal residual method for large-scale Lyapunov equations
The solution of large-scale algebraic Lyapunov equations is important in the stability analysis of linear dynamical systems. We present a projection-based residual minimizing procedure for solving the Lyapunov equation. As opposed to earlier methods (e.g., [I.M. Jaimoukha and E.M. Kasenally, SIAM J. Numer. Anal., 1994]), our algorithm relies on an inner iterative solver, accompanied with a selection of preconditioning techniques that effectively exploit the structure of the problem. The residual minimization allows us to relax the coefficient matrix passivity constraint, which is sometimes hard to meet in real application problems. Numerical experiments with standard benchmark problems will be reported.
Yiding Lin
School of Mathematical Sciences, Xiamen University, Xiamen, China
Dipartimento di Matematica, Universita di Bologna, Bologna, [email protected]

Valeria Simoncini
Dipartimento di Matematica, Universita di Bologna, Bologna, [email protected]

Talk 4. Contributions to the analysis of the extended Krylov subspace method (EKSM) for Lyapunov matrix equations
The EKSM is an iterative method which can be used to approximate the solution of Lyapunov matrix equations and to extract reduced order models of linear time-invariant systems (LTIs). We explain why any positive residual curve is possible, and we show how to construct LTIs for which the approximations of the Gramians returned by the EKSM are orthogonal in exact arithmetic. Finally, we build an electrical circuit for which the Gramians P and Q satisfy fl(P fl(Qx)) = 0, where fl(x) denotes the floating point representation of x.
Carl Christian K. Mikkelsen
Department of Computer Science and HPC2N
Umea [email protected]

CP 18. Eigenvalue problems II

Talk 1. Differentials of eigenvalues and eigenvectors under nonstandard normalizations with applications
We propose a new approach to the identification of the change in eigenvalues and eigenvectors as a consequence of a perturbation applied to the eigenproblem. The approach has three differences with respect to most previous literature that allow for covering several intricate normalizations previously adopted and simplify the proofs. First, we start from the differentials, and not from the derivatives. Second, in our more general result, we explicitly consider two normalization functions, one for the right and one for the left eigenvector. Third, using complex differentials, we explicitly allow for nonanalytic normalization functions. Several applications are discussed in detail.
Raffaello Seri
Dept. of Economics
University of [email protected]

Christine Choirat
Dept. of Economics
University of [email protected]

Michele Bernasconi
Dept. of Economics
University of [email protected]

Talk 2. A solution to the inverse eigenvalue problem for certain singular Hermitian matrices
We present the solution to the inverse eigenvalue problem for certain singular Hermitian matrices by developing a method, in the context of consistency conditions, for solving the direct eigenvalue problem for singular matrices. Based on this method, we propose an algorithm to reconstruct such matrices from their eigenvalues. That is, we develop algorithms and prove that they solve special 2×2, 3×3 and 4×4 singular symmetric matrices. In each case, the number of independent matrix elements reduces to the extent that there is an isomorphism between the seed elements and the eigenvalues. We also initiate a differential geometric and a numerical analytic interpretation of the inverse eigenvalue problem using a fiber bundle with structure group O(n). A simple and more practicable algorithm based on Newton's iterative method is developed to construct symmetric matrices using singular symmetric matrices as the initial matrices.
Kwasi Baah Gyamfi
Dept. of Mathematics
Kwame Nkrumah University of Science and Technology, Kumasi, [email protected]

Anthony Aidoo
Dept. of Mathematics and Computer Science
Eastern Connecticut State University, Willimantic, [email protected]

Talk 3. Divide and conquer the CS decomposition
The CS decomposition factors a unitary matrix into simpler components, revealing the principal angles between certain linear subspaces. We present a divide-and-conquer algorithm for the CS decomposition and the closely related generalized singular value decomposition. The method is inspired by the work of Gu and Eisenstat and others. Novel representations enforce orthogonality.
Brian D. Sutton
Dept. of Mathematics
Randolph-Macon [email protected]
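The principal angles that the CS decomposition reveals can be computed directly by the Bjorck-Golub SVD approach; this is a minimal sketch of that connection, not the divide-and-conquer algorithm of the talk.

```python
import numpy as np

def principal_angles(X, Y):
    """Principal angles between the column spaces of X and Y, computed
    from the singular values of Q1^T Q2 (Bjorck-Golub)."""
    Q1, _ = np.linalg.qr(X)
    Q2, _ = np.linalg.qr(Y)
    s = np.linalg.svd(Q1.T @ Q2, compute_uv=False)
    return np.arccos(np.clip(s, -1.0, 1.0))

# span{e1, e2} versus span{e1, cos(45)*e2 + sin(45)*e3}: angles 0 and pi/4
X = np.eye(4)[:, :2]
Y = np.column_stack([np.eye(4)[:, 0],
                     [0.0, np.cos(np.pi/4), np.sin(np.pi/4), 0.0]])
angles = np.sort(principal_angles(X, Y))
```

In exact arithmetic the cosines returned here are exactly the C block of the CS decomposition of the unitary matrix assembled from orthonormal bases of the two subspaces.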

Talk 4. The optimal perturbation bounds of the Moore-Penrose inverse under the Frobenius norm
We obtain the optimal perturbation bounds of the Moore-Penrose inverse under the Frobenius norm by using the singular value decomposition, which improves the results in the earlier paper [P.-A. Wedin, Perturbation theory for pseudo-inverses, BIT 13 (1973) 217-232]. In addition, a perturbation bound of the Moore-Penrose inverse under the Frobenius norm in the case of the multiplicative perturbation model is also given.
Zheng Bing
School of Mathematics and Statistics
Lanzhou [email protected]

Meng Lingsheng
School of Mathematics and Statistics
Lanzhou [email protected]

CP 19. Positivity I

Talk 1. Positivity preserving simulation of differential-algebraic equations
We discuss the discretization of differential-algebraic equations with the property of positivity, as they arise e.g. in chemical reaction kinetics or process engineering. For index-1 problems, in which the differential and algebraic equations are explicitly given, we present conditions for a positive numerical approximation that also meets the algebraic constraints. These results are extended to higher index problems, i.e., problems in which some algebraic equations are hidden in the system, by a positivity preserving index reduction. This remodeling procedure filters out the hidden constraints without destroying the positivity property and admits tracing these systems back to the index-1 case.
Ann-Kristin Baum
Institut fur Mathematik
Technische Universitat [email protected]
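The positivity issue discussed above is easiest to see without the algebraic constraints: for a linear ODE x' = A x with a Metzler matrix A (nonnegative off-diagonal entries), the implicit Euler step multiplies by the inverse of an M-matrix, which is entrywise nonnegative, so nonnegative states stay nonnegative. A toy ODE sketch only, with an illustrative matrix; the DAE index reduction of the talk is not reproduced:

```python
import numpy as np

# Metzler matrix: nonnegative off-diagonal entries
A = np.array([[-2.0, 1.0, 0.5],
              [0.3, -1.0, 0.2],
              [0.1, 0.4, -3.0]])
h, steps = 0.1, 100
M = np.linalg.inv(np.eye(3) - h * A)   # inverse of an M-matrix: entrywise >= 0
x = np.array([1.0, 0.0, 2.0])          # nonnegative initial state
trajectory = [x]
for _ in range(steps):
    x = M @ x                          # implicit Euler step x_{k+1} = (I - hA)^{-1} x_k
    trajectory.append(x)
trajectory = np.array(trajectory)
```

An explicit Euler step x + hAx, by contrast, preserves positivity only under a step-size restriction on h.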

Talk 2. Computing the exponentials of essentially nonnegative matrices with high relative accuracy
In this talk we present an entry-wise forward stable algorithm for computing the exponentials of essentially nonnegative matrices (Metzler matrices). The algorithm is a scaling-and-squaring approach built on Taylor expansion and non-diagonal Pade approximation. As an important feature, we also provide an entry-wise error estimate to the user. Both rounding error analysis and numerical experiments demonstrate the stability of the new algorithm.
Meiyue Shao
Department of Computing Science
Umea [email protected]

Weiguo Gao
School of Mathematical Sciences
Fudan [email protected]

Jungong Xue
School of Mathematical Sciences
Fudan [email protected]
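The scaling-and-squaring skeleton that the talk above builds on can be sketched in a few lines with a plain truncated Taylor series; this simplified version has none of the entrywise stability machinery or error estimates of the talk, and the parameters s and terms are illustrative.

```python
import numpy as np

def expm_taylor_ss(A, s=10, terms=20):
    """exp(A) via scaling and squaring with a truncated Taylor series:
    exp(A) = (exp(A / 2^s))^(2^s)."""
    B = A / (2.0 ** s)
    n = A.shape[0]
    E, T = np.eye(n), np.eye(n)
    for k in range(1, terms + 1):
        T = T @ B / k          # next Taylor term B^k / k!
        E = E + T
    for _ in range(s):
        E = E @ E              # undo the scaling by repeated squaring
    return E

E1 = expm_taylor_ss(np.array([[0.0, 1.0], [0.0, 0.0]]))    # nilpotent: exp = I + A
E2 = expm_taylor_ss(np.array([[-2.0, 1.0], [3.0, -4.0]]))  # a Metzler matrix
```

For a Metzler matrix the exact exponential is entrywise nonnegative, which is the structural property the talk's algorithm preserves to high relative accuracy.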

Talk 3. Sparse and unique nonnegative matrix factorization through data preprocessing
Nonnegative matrix factorization (NMF) has become a very popular technique in data mining because it is able to extract meaningful features through a sparse and part-based representation. However, NMF has the drawback of being highly ill-posed, and there typically exist many different but equivalent factorizations. In this talk, we introduce a preprocessing technique leading to more well-posed NMF problems whose solutions are sparser. It is based on the theory of M-matrices and the geometric interpretation of NMF, and requires solving constrained linear least squares problems. We theoretically and experimentally demonstrate the effectiveness of our approach.
Nicolas Gillis
Department of Combinatorics and Optimization
University of [email protected]
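For readers unfamiliar with NMF itself, the basic factorization that the talk's preprocessing makes better posed can be computed with the standard Lee-Seung multiplicative updates; this is a generic baseline sketch on synthetic data, not the preprocessing technique of the talk.

```python
import numpy as np

def nmf_multiplicative(X, r, iters=500, seed=0):
    """Lee-Seung multiplicative updates for X ~ W H with W, H >= 0."""
    rng = np.random.default_rng(seed)
    m, n = X.shape
    W = rng.random((m, r)) + 0.1
    H = rng.random((r, n)) + 0.1
    eps = 1e-12                                 # guards against division by zero
    for _ in range(iters):
        H *= (W.T @ X) / (W.T @ W @ H + eps)
        W *= (X @ H.T) / (W @ H @ H.T + eps)
    return W, H

rng = np.random.default_rng(1)
X = rng.random((8, 3)) @ rng.random((3, 10))    # exact rank-3 nonnegative data
W, H = nmf_multiplicative(X, 3)
rel_err = np.linalg.norm(X - W @ H) / np.linalg.norm(X)
```

The updates keep W and H nonnegative by construction; the ill-posedness the talk addresses is that many different (W, H) pairs reach essentially the same residual.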

Talk 4. Iterative regularized solution of linear complementarity problems
For linear complementarity problems (LCP) with a positive semidefinite matrix M, iterative solvers can be derived by a process of regularization. In [R. W. Cottle et al., The LCP, Academic Press, 1992] the initial LCP is replaced by a sequence of positive definite ones, with the matrices M + αI. Here we analyse a generalization of this method where the identity I is replaced by a positive definite diagonal matrix D. We prove that the sequence of approximations so defined converges to the minimal D-norm solution of the initial LCP. This extension opens the possibility for interesting applications in the field of rigid multibody dynamics.
Constantin Popa
Faculty of Mathematics and Informatics
“Ovidius” University of Constanta, [email protected]

Tobias Preclik
Chair of Computer Science 10 (System Simulation)
Friedrich-Alexander-Universitat Erlangen-Nurnberg, [email protected]

Ulrich Rude
Chair of Computer Science 10 (System Simulation)
Friedrich-Alexander-Universitat Erlangen-Nurnberg, [email protected]
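The regularization idea of the talk above can be sketched on a tiny LCP: each outer step solves a strictly monotone LCP with matrix M + αD (here by projected Gauss-Seidel), re-centered at the previous iterate. This is an illustrative sketch under assumed parameters, not the authors' scheme or convergence analysis.

```python
import numpy as np

def pgs_lcp(M, q, z0, iters=500):
    """Projected Gauss-Seidel for the LCP: z >= 0, Mz + q >= 0, z.(Mz+q) = 0."""
    z = z0.copy()
    for _ in range(iters):
        for i in range(len(q)):
            z[i] = max(0.0, z[i] - (M[i] @ z + q[i]) / M[i, i])
    return z

def regularized_lcp(M, q, D, alpha=0.1, outer=50):
    """Outer regularization: solve a sequence of LCPs with matrix M + alpha*D;
    at a fixed point, (M + aD)z + q - aDz = Mz + q, so z solves the original LCP."""
    z = np.zeros_like(q)
    for _ in range(outer):
        z = pgs_lcp(M + alpha * D, q - alpha * (D @ z), z)
    return z

M = np.array([[2.0, 1.0], [1.0, 2.0]])   # positive definite for this toy example
q = np.array([-1.0, -1.0])
z = regularized_lcp(M, q, np.eye(2))     # D = I recovers the Cottle et al. scheme
w = M @ z + q
```

With D = I this is the classical scheme; the talk's contribution concerns general positive definite diagonal D and convergence to the minimal D-norm solution in the genuinely semidefinite case.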

CP 20. Control systems II

Talk 1. Disturbance decoupling problem for singular switched linear systems
In this paper, a geometric approach to the disturbance decoupling problem for switched singular linear systems is made. A switched singular linear system with disturbance is a system which consists of several linear subsystems with disturbance and a piecewise constant map σ taking values in the index set M = {1, . . . , ℓ} which indexes the different subsystems. In the continuous case, such a system can be mathematically described by E_σ ẋ(t) = A_σ x(t) + B_σ u(t) + G_σ g(t), where E_σ, A_σ ∈ M_n(R), B_σ ∈ M_{n×m}(R), G_σ ∈ M_{n×p}(C) and ẋ = dx/dt. The term g(t), t ≥ 0, may represent modeling or measuring errors, noise, or higher order terms in linearization. In the case of standard state space systems the disturbance decoupling problem has been largely studied. The problem of constructing feedbacks that suppress this disturbance, in the sense that g(t) does not affect the input-output behaviour of the switched singular linear system with disturbance, is analyzed, and a solvability condition for the problem is obtained using invariant subspaces for singular switched linear systems.
M. Dolors Magret
Departament de Matematica Aplicada I
Universitat Politecnica de [email protected]

M. Isabel Garcia-Planas
Departament de Matematica Aplicada I
Universitat Politecnica de [email protected]

Talk 2. Invariant subspaces of switched linear systems
Invariant subspaces under a linear transformation were defined by von Neumann (1935) and generalized to linear dynamical systems: G. Basile and G. Marro (1969) defined the concept of controlled and conditioned invariant subspaces for control linear systems, and I.M. Buzurovic (2000) studied (controlled and conditioned) invariant subspaces for singular linear systems. They are connected to a great number of disciplines, for example the geometric study of control theory of linear time-invariant dynamical systems (in particular, controllability and observability). This notion was also extended to linear parameter-varying systems by introducing the concept of parameter-varying invariant subspaces. Recently, several authors such as A.A. Julius, A.J. van der Schaft, E. Yurtseven, W.P.M.H. Heemels and M.K. Camlibel, among others, introduced invariant subspaces for switched linear systems (those containing any trajectory which originates on them) and applied them to different topics, for example disturbance decoupling and observer design problems. In this work we study properties of, and algorithms to determine, the set of all invariant subspaces for a given switched linear system using linear algebra tools, providing a computational alternative to theoretical results.
M. Eulalia Montoro

Departament de Matematica Aplicada I
Universitat Politecnica de [email protected]

M. Dolors Magret
Departament de Matematica Aplicada I
Universitat Politecnica de [email protected]

David Mingueza
Departament de Matematica Aplicada I
Universitat Politecnica de [email protected]

Talk 3. On the pole placement problem for singular systems
Let us consider a singular system with outputs Eẋ = Ax + Bu, y = Cx, with E, A ∈ F^{h×n}, B ∈ F^{h×m}, C ∈ F^{p×n}. For regular systems (E nonsingular), the pole assignment problem by state feedback, and by state feedback and output injection, was solved a few decades ago. Recently, the pole assignment problem by state feedback has been solved for singular systems (E singular). We try to extend the solution of the pole assignment problem to singular systems by state feedback and output injection: given a monic homogeneous polynomial f ∈ F[x, y], we study the existence of a state feedback matrix F and an output injection K such that the matrix pencil sE − (A + BF + KC) has f as characteristic polynomial, under a regularizability condition on the system.
Alicia Roca
Dpto. de Matematica Aplicada
Universidad Politecnica de [email protected]

Talk 4. Coordination control of linear systems
Coordinated linear systems are particular hierarchical systems, characterized by conditional independence and invariance properties of the underlying linear spaces. Any distributed linear system can be transformed into a coordinated linear system using observability decompositions. The motivation behind studying these systems is that some global control objectives can be achieved via local controllers: e.g., global stabilizability reduces to local stabilizability of all subsystems. The corresponding LQ problem separates into independent LQ problems on the lower level and a more involved control problem on the higher level. For the latter problem, possible approaches include using subsystem observers, numerical optimization, and event-based feedback.
Pia L. Kempker
Dept. of Mathematics, FEW
Vrije Universiteit [email protected]

Andre C.M. Ran
Dept. of Mathematics, FEW
Vrije Universiteit [email protected]

Jan H. van Schuppen
Centrum voor Wiskunde en Informatica (CWI)
Amsterdam, The [email protected]

CP 21. Matrix pencils

Talk 1. Looking at the complexity index as a matrix measure
In this paper we discuss a matrix measure based on a matrix norm. This measure, named the complexity index, was presented by Amaral et al. in 2007. In their study the authors consider that an economic system is represented by a square nonnegative matrix, and their aim is to quantify complexity as interdependence in an input-output (IO) system. However, for large dimension matrices the computation of this complexity index is almost impossible. In order to contribute to a better perception of this measure in an economic context and to overcome this difficulty, we present some properties and bounds for this index.
Anabela Borges
Dept. of Mathematics
University of Tras-os Montes e Alto [email protected]

Teresa Pedroso de Lima
Faculty of Economics
University of [email protected]

Talk 2. A matrix pencil tool to solve a sampling problem
In sampling theory, the available data are often samples of some convolution operator acting on the function itself. In this talk we use the oversampling technique for obtaining sampling formulas involving these samples and having compactly supported reconstruction functions. The original problem reduces to finding a polynomial left inverse of a matrix pencil intimately related to the sampling problem. We obtain conditions under which computing such reconstruction functions is possible in a practical way when the oversampling rate is minimum. This solution is not unique, but there is no solution with fewer nonzero consecutive terms than the one obtained.
Alberto Portal
Dept. of Applied [email protected]

Antonio G. Garcia
Dept. of Mathematics
Universidad Carlos III de [email protected]

Miguel Angel Hernandez-Medina
Dept. of Applied [email protected]

Gerardo Perez-Villalon
Dept. of Applied [email protected]

Talk 3. A duality relation for matrix pencils with applications to linearizations
Let A(x) = Σ_{i=0}^{d} A_i x^i be an n×n regular matrix polynomial, let Q ∈ C^{n(d+1)×n(d+1)} be the QR factor of the block column matrix [A_0; A_1; . . . ; A_d], and let Q_SE (resp. Q_NE) be the nd×nd matrix obtained by removing the first n columns and the first (resp. last) n rows from Q. Then Q_SE − x Q_NE is a linearization of A(x). This interesting result is a consequence of a new method for generating equivalent pencils, which is related to the so-called “pencil arithmetic” [Benner, Byers, '06]. This provides a new interesting framework for studying many known families of linearizations (Fiedler pencils, M4 vector spaces). In the talk we present this technique and study the numerical stability of the QR-based linearization above.
Federico Poloni
Institut fur Mathematik

Technische Universitat [email protected]

Talk 4. Stability of reducing subspaces of a pencil
Let λB − A be a pencil of m×n complex matrices. We call a subspace N of C^n a reducing subspace of the pencil λB − A if

dim(AN + BN) = dim N − dim_{C(λ)} Ker(λB − A),

where C(λ) is the field of rational fractions in λ (P. Van Dooren, 1983). In this talk we analyze the existence of stable reducing subspaces of the pencil λB − A, except for the case in which λB − A has only one column minimal index and only one row minimal index and this last index is nonzero, as a complete system of invariants for the strict equivalence.
Gorka Armentia
Dept. of Mathematics and Computer Engineering
Public University of Navarre, [email protected]

Juan-Miguel Gracia
Dept. of Applied Mathematics and Statistics
The University of the Basque Country, [email protected]

Francisco E. Velasco
Dept. of Applied Mathematics and Statistics
The University of the Basque Country, [email protected]

CP 22. Matrix functions

Talk 1. Improved Schur-Pade algorithm for fractional powers of a matrix
We present several improvements to the Schur-Pade algorithm for computing arbitrary real powers A^s of a matrix A ∈ C^{n×n}, developed by the first two authors in [SIAM J. Matrix Anal. Appl., 32 (2011), pp. 1056-1078]. We utilize an error bound in terms of the quantities ‖A^p‖^{1/p}, for several small integers p, instead of ‖A‖, extend the algorithm to allow real arithmetic to be used throughout when the matrix is real, and provide an algorithm that computes both A^s and the Frechet derivative at A in the direction E at a cost no more than three times that of computing A^s alone. These improvements put the algorithm's development on a par with the latest (inverse) scaling and squaring algorithms for the matrix exponential and logarithm.
Lijing Lin
School of Mathematics
The University of Manchester, [email protected]

Nicholas J. Higham
School of Mathematics
The University of Manchester, [email protected]

Edvin Deadman
University of Manchester, UK
Numerical Algorithms Group, [email protected]
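For symmetric positive definite matrices, the fractional power A^s that the talk above computes can be obtained directly from an eigendecomposition; this naive sketch is not the Schur-Pade algorithm (which handles general matrices and non-diagonalizable cases), just a reference point for what A^s means.

```python
import numpy as np

def matrix_power_spd(A, s):
    """A^s for symmetric positive definite A via eigendecomposition:
    A^s = V diag(w^s) V^T. (Schur-Pade is needed for general matrices.)"""
    w, V = np.linalg.eigh(A)
    return (V * w**s) @ V.T

rng = np.random.default_rng(0)
B = rng.normal(size=(5, 5))
A = B @ B.T + 5 * np.eye(5)            # SPD test matrix
R = matrix_power_spd(A, 0.5)           # a matrix square root of A
err = np.linalg.norm(R @ R - A)
```

The defining identities A^{0.5} A^{0.5} = A and A^{0.3} A^{0.7} = A hold to roundoff here; the hard part addressed by the talk is achieving comparable accuracy for nonnormal A without a full eigendecomposition.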

Talk 2. An automated version of rational Arnoldi for Markov matrix functions
The rational Arnoldi method is a powerful one for approximating functions of large sparse matrices times a vector, f(A)v, by means of Galerkin projection onto a subspace span{(A + s_1 I)^{-1} v, . . . , (A + s_m I)^{-1} v}. The selection of asymptotically optimal shifts s_j for this method is crucial for its convergence rate. We present and investigate a heuristic for the automated shift selection when the function to be approximated is of Markov type, f(z) = ∫_{-∞}^{0} dγ(x)/(z − x), such as the matrix square root or other impedance functions. The performance of this approach is demonstrated on several numerical examples involving symmetric and nonsymmetric matrices.
Leonid Knizhnerman
Mathematical Modelling Department
Central Geophysical Expedition, Moscow, [email protected]

Stefan Guttel
Mathematical Institute
University of Oxford, [email protected]
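The projection step described in the talk above can be sketched directly: build the shifted-inverse basis, orthonormalize it, and evaluate f on the small projected matrix. This is a minimal Galerkin extraction for a symmetric A and hand-picked shifts; the talk's contribution, the automated shift selection, is not reproduced.

```python
import numpy as np

def rational_galerkin(A, v, shifts, f_scalar):
    """Galerkin approximation of f(A)v from span{(A + s_j I)^{-1} v}
    (A symmetric, so the projected matrix can be handled by eigh)."""
    n = len(v)
    V = np.column_stack([np.linalg.solve(A + s * np.eye(n), v) for s in shifts])
    Q, _ = np.linalg.qr(V)                 # orthonormal basis of the subspace
    H = Q.T @ A @ Q                        # projected (small, symmetric) matrix
    w, U = np.linalg.eigh(H)
    fH = (U * f_scalar(w)) @ U.T           # f applied to the projected matrix
    return Q @ (fH @ (Q.T @ v))

A = np.diag(np.linspace(1.0, 10.0, 4))    # tiny SPD example, diagonal for checking
v = np.ones(4)
approx = rational_galerkin(A, v, [0.5, 1.0, 2.0, 4.0], lambda x: 1/np.sqrt(x))
exact = v / np.sqrt(np.diag(A))           # for diagonal A, f(A)v is entrywise
```

With as many well-separated shifts as the dimension, the subspace is the whole space and the extraction is exact; in realistic large-scale use, a few well-chosen shifts must do the same job approximately, which is why their selection is crucial.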

Talk 3. Ranking hubs and authorities using matrix functions
The notions of subgraph centrality and communicability, based on the exponential of the adjacency matrix of the underlying graph, have been effectively used in the analysis of undirected networks. In this talk we propose an extension of these measures to directed networks, and we apply them to the problem of ranking hubs and authorities. The extension is achieved by bipartization, i.e., the directed network is mapped onto a bipartite undirected network with twice as many nodes in order to obtain a network with a symmetric adjacency matrix. We explicitly determine the exponential of this adjacency matrix in terms of the adjacency matrix of the original directed network, and we give an interpretation of centrality and communicability in this new context, leading to a technique for ranking hubs and authorities. The matrix exponential method for computing hubs and authorities is compared to the well-known HITS algorithm, both on small artificial examples and on more realistic real-world networks. This is joint work with Michele Benzi (Emory University) and Ernesto Estrada (University of Strathclyde).
Christine Klymko
Dept. of Mathematics and Computer Science
Emory [email protected]
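The bipartization described above is short to write down: form the symmetric block matrix B = [[0, A], [A^T, 0]] and read hub and authority scores off the diagonal of exp(B). A small sketch with an illustrative digraph (a dense eigendecomposition stands in for whatever large-scale method one would use in practice):

```python
import numpy as np

def hub_authority_scores(A):
    """Hub/authority centralities from the diagonal of exp(B), where
    B = [[0, A], [A^T, 0]] is the adjacency matrix of the bipartized digraph."""
    m, n = A.shape
    B = np.block([[np.zeros((m, m)), A], [A.T, np.zeros((n, n))]])
    w, V = np.linalg.eigh(B)                       # B is symmetric
    d = np.einsum('ij,j,ij->i', V, np.exp(w), V)   # diag of V exp(w) V^T
    return d[:m], d[m:]                            # hub scores, authority scores

# toy digraph: node 0 points to everyone (hub), node 3 is pointed at (authority)
A = np.array([[0, 1, 1, 1],
              [0, 0, 0, 1],
              [0, 0, 0, 1],
              [0, 0, 0, 0]], dtype=float)
hubs, auths = hub_authority_scores(A)
```

The top-left diagonal block of exp(B) is cosh((AA^T)^{1/2}), which is why the returned scores aggregate weighted counts of even-length alternating walks rather than just the dominant singular vectors used by HITS.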

Talk 4. The geometric mean of two matrices from a computational viewpoint
We consider the geometric mean of two matrices with an eye on computation. We discuss and analyze several numerical algorithms based on different properties and representations of the geometric mean, and we show that most of them can be classified in terms of rational approximations of the inverse square root function. Finally, a review of relevant applications is given.
Bruno Iannazzo
Dipartimento di Matematica e Informatica
Universita di Perugia, [email protected]
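One of the standard representations behind the algorithms surveyed above is A # B = A^{1/2} (A^{-1/2} B A^{-1/2})^{1/2} A^{1/2} for symmetric positive definite A and B. A dense numpy sketch of this formula (the talk's point is that better-scaling alternatives exist, built on rational approximations of the inverse square root):

```python
import numpy as np

def sqrtm_spd(A):
    """Principal square root of an SPD matrix via eigendecomposition."""
    w, V = np.linalg.eigh(A)
    return (V * np.sqrt(w)) @ V.T

def geometric_mean(A, B):
    """Geometric mean A # B = A^{1/2} (A^{-1/2} B A^{-1/2})^{1/2} A^{1/2}."""
    Ah = sqrtm_spd(A)
    Ahinv = np.linalg.inv(Ah)
    return Ah @ sqrtm_spd(Ahinv @ B @ Ahinv) @ Ah

rng = np.random.default_rng(0)
X = rng.normal(size=(4, 4)); Y = rng.normal(size=(4, 4))
A = X @ X.T + np.eye(4)
B = Y @ Y.T + np.eye(4)
G = geometric_mean(A, B)
# Riccati characterization: G is the unique SPD solution of G A^{-1} G = B
riccati_residual = np.linalg.norm(G @ np.linalg.inv(A) @ G - B)
```

The Riccati identity G A^{-1} G = B follows directly from the formula and is a convenient correctness check for any implementation.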

CP 23. Applications

Talk 1. Study on efficient numerical simulation methods of dynamic interaction systems excited via moving contact points
When a railway train is travelling on the track, vehicles and the track are excited via moving contact points between wheels and a rail. A numerical simulation of this dynamic interaction is formulated as the problem of solving a large-scale, time-dependent linear or nonlinear system of equations. For the linear case, two methods, a direct method using the Sherman-Morrison-Woodbury formula and a PCG (preconditioned conjugate gradient) method, have been comparatively investigated. In the PCG method, by carrying out the Cholesky decomposition of the time-independent part of the coefficient matrix and constructing the preconditioner from it, a very efficient numerical simulation has been attained. In this talk, numerical simulation methods for the nonlinear case will also be described.
Akiyoshi Yoshimura
Professor Emeritus
School of Computer Science
Tokyo University of [email protected]
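The Sherman-Morrison-Woodbury formula mentioned above lets one reuse a factorization of a fixed matrix A when it receives a low-rank, time-dependent update U C V, exactly the structure of a moving contact point. A generic sketch (a precomputed dense inverse stands in for the reused factorization; the problem data are illustrative):

```python
import numpy as np

def smw_solve(Ainv_apply, U, C, V, b):
    """Solve (A + U C V) x = b via Sherman-Morrison-Woodbury:
    (A + UCV)^{-1} = A^{-1} - A^{-1} U (C^{-1} + V A^{-1} U)^{-1} V A^{-1}."""
    y = Ainv_apply(b)                      # A^{-1} b
    Z = Ainv_apply(U)                      # A^{-1} U, column by column
    small = np.linalg.inv(C) + V @ Z       # small "capacitance" matrix
    return y - Z @ np.linalg.solve(small, V @ y)

rng = np.random.default_rng(0)
n, k = 8, 2
A = rng.normal(size=(n, n)) + n * np.eye(n)   # well-conditioned base matrix
U = rng.normal(size=(n, k)); V = rng.normal(size=(k, n))
C = np.eye(k)
b = rng.normal(size=n)
Ainv = np.linalg.inv(A)                       # stands in for a reused factorization
x = smw_solve(lambda z: Ainv @ z, U, C, V, b)
residual = np.linalg.norm((A + U @ C @ V) @ x - b)
```

Only a k-by-k system changes between time steps, which is what makes the direct method of the talk competitive with iterative solvers.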

Talk 2. A matrix version of a digital signature scheme based on the Pell equation
Cryptography has evolved by using different types of special matrices, such as the Vandermonde matrix, the Fibonacci Q-matrix and the Latin square, and based on these, various types of cryptosystems and digital signature schemes have been proposed. We apply the idea of the Elliptic Curve Digital Signature Algorithm (ECDSA) to the solution space of the Pell equation to design a digital signature scheme and then produce a matrix version of it. We also compare the security of our signature and its matrix version to DSA and ECDSA. We show that the signature scheme based on the Pell equation is more efficient than its elliptic curve analogue, ECDSA.
Aditya Mani Mishra
Dept. of Mathematics
Motilal Nehru National Institute of Technology, Allahabad-211004, [email protected]

S. N. PandeyDept. of MathematicsMotilal Nehru National Institute of Technology, Allahabad-211004,[email protected]

Sahadeo PadhyeDept. of MathematicsMotilal Nehru National Institute of Technology, Allahabad-211004,[email protected]

Talk 3. Accelerating electronic structure calculation with the pole expansion plus selected inversion method
Kohn-Sham density functional theory (KSDFT) is the most widely used electronic structure theory. However, the standard method for solving KSDFT scales cubically with respect to system size. In the recently developed pole expansion plus selected inversion (PEpSI) method, KSDFT is solved by evaluating the diagonal elements of the inverse of a series of sparse symmetric matrices, and the overall algorithm scales at most quadratically with respect to system size for all materials. In this talk I will introduce the PEpSI method, and present the parallel performance of the new method with applications to real materials.
Lin Lin
Computational Research Division
Lawrence Berkeley National Laboratory
[email protected]

Chao Yang
Computational Research Division
Lawrence Berkeley National Laboratory
[email protected]

Talk 4. Evaluating computer vision systems
As computer vision systems advance technologically and become more pervasive, the need for more sophisticated and effective methods to evaluate their accuracy grows. One method to evaluate the accuracy of a given computer vision system is to solve the "robot-world/hand-eye calibration problem", which has the form AX = YB for unknown homogeneous matrices X and Y. In this talk, I will present a closed-form solution to this problem using the Kronecker product.
Mili Shah
Department of Mathematics and Statistics
Loyola University Maryland
[email protected]
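Shah's closed-form solution itself is not reproduced here, but the Kronecker-product vectorization behind problems of the form A_i X = Y B_i can be sketched (a generic null-space approach under the assumption of several measurement pairs; all names are ours). Using column-stacked vec, vec(AX) = (I ⊗ A) vec(X) and vec(YB) = (Bᵀ ⊗ I) vec(Y), so X and Y appear as a null vector of a stacked linear system:

```python
import numpy as np

def solve_axyb(As, Bs):
    """Recover X, Y (up to a common scale) from A_i X = Y B_i by stacking
    (I (x) A_i) vec(X) - (B_i^T (x) I) vec(Y) = 0 and taking a null vector."""
    n = As[0].shape[0]
    I = np.eye(n)
    rows = [np.hstack([np.kron(I, A), -np.kron(B.T, I)]) for A, B in zip(As, Bs)]
    M = np.vstack(rows)
    z = np.linalg.svd(M)[2][-1]           # right singular vector of smallest sigma
    X = z[:n * n].reshape(n, n, order='F')  # undo column-stacked vec
    Y = z[n * n:].reshape(n, n, order='F')
    return X, Y

rng = np.random.default_rng(2)
n = 3
X_true = rng.standard_normal((n, n))
Y_true = rng.standard_normal((n, n))
Bs = [rng.standard_normal((n, n)) for _ in range(4)]
As = [Y_true @ B @ np.linalg.inv(X_true) for B in Bs]   # consistent measurements
X, Y = solve_axyb(As, Bs)
assert all(np.allclose(A @ X, Y @ B) for A, B in zip(As, Bs))
```

The closed-form calibration solution additionally exploits the homogeneous (rotation plus translation) structure of X and Y, which this generic sketch ignores.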

CP 24. Preconditioning II

Talk 1. Overlapping blocks by growing a partition with applications to preconditioning
Starting from a partitioning of an edge-weighted graph into subgraphs, we develop a method which enlarges the respective sets of vertices to produce a decomposition with overlapping subgraphs. In contrast to existing methods, we propose that the vertices to be added when growing a subset are chosen according to a criterion which measures the strength of connectivity with this subset. We obtain an overlapping decomposition of the set of variables which can be used for algebraic Schwarz preconditioners. Numerical results show that with these overlapping Schwarz preconditioners we usually substantially improve GMRES convergence when compared with preconditioners based on a non-overlapping decomposition, an overlapping decomposition based on adjacency sets without other criteria, or incomplete LU.
Stephen D. Shank
Department of Mathematics
Temple University, Philadelphia
[email protected]

David Fritzsche
Faculty of Mathematics and Science
Bergische Universitat Wuppertal
[email protected]

Andreas Frommer
Faculty of Mathematics and Science
Bergische Universitat Wuppertal
[email protected]

Daniel B. Szyld
Department of Mathematics
Temple University, Philadelphia
[email protected]

Talk 2. Communication avoiding ILU(0) preconditioner
In this talk we present a general communication-avoiding incomplete LU preconditioner for the system Ax = b, where communication-avoiding means reducing data movement between different levels of memory, in serial or parallel computation. We implement our method for ILU(0) preconditioners on matrices A that have a regular grid structure (2D 5-point stencil, ...). The matrix A is reordered using nested dissection, then a special block and separator reordering is applied that allows us to avoid communication. Finally, we show that our reordering of A does not affect the convergence rate of the ILU preconditioned systems as compared to the natural ordering of A, while it reduces data movement and improves the time needed for convergence.
Sophie Moufawad
INRIA Saclay, University Paris 11, France
[email protected]

Laura Grigori
INRIA Saclay, University Paris 11, France
[email protected]

Talk 3. Preconditioning for large scale µFE analysis of bone poroelasticity
The mixed finite element discretization of Biot's model of poroelasticity in the displacement-flow-pressure (u-f-p) formulation leads to linear systems of the form

    [ A_uu   O      A_pu^T ] [ u ]   [ g ]
    [ O      A_ff   A_pf^T ] [ f ] = [ 0 ]   (1)
    [ A_pu   A_pf   -A_pp  ] [ p ]   [ b ]

where all diagonal blocks are positive definite. We solve (1) with MINRES and GMRES and preconditioners composed of AMG preconditioners for the individual diagonal blocks. We show optimality of the preconditioners and scalability of the parallel solvers. We also discuss more general inner-outer iterations.
Peter Arbenz
Computer Science Department
ETH Zurich
[email protected]

Cyril Flaig
Computer Science Department
ETH Zurich
[email protected]

Erhan Turan
Computer Science Department
ETH Zurich
[email protected]

Talk 4. Block-triangular preconditioners for systems arising from edge-preserving image restoration
Signal and image restoration problems are often solved by minimizing a cost function consisting of an l2 data-fidelity term and a regularization term. We consider a class of convex and edge-preserving regularization functions. Specifically, half-quadratic regularization as a fixed point iteration method is usually employed to solve this problem. We solve the above-described signal and image restoration problems with the half-quadratic regularization technique by making use of the Newton method. At each iteration of the Newton method, the Newton equation is a structured system of linear equations with a symmetric positive definite coefficient matrix, and may be efficiently solved by the preconditioned conjugate gradient method accelerated with the modified block SSOR preconditioner. The experimental results show that this approach is more feasible and effective than the half-quadratic regularization approach.
Yu-Mei Huang
School of Mathematics and Statistics
Lanzhou University
[email protected]

Zhong-Zhi Bai
Academy of Mathematics and Systems Science
Chinese Academy of Sciences
[email protected]

Michael K. Ng
Dept. of Mathematics
Hong Kong Baptist University
[email protected]

CP 25. Tensors and multilinear algebra

Talk 1. Decomposition of semi-nonnegative semi-symmetric three-way tensors based on LU matrix factorization
CANDECOMP/PARAFAC (CP) decomposition of semi-symmetric three-way tensors is an essential tool in signal processing, particularly in blind source separation. However, in many applications, such as magnetic resonance spectroscopy, both symmetric modes of the three-way array are inherently nonnegative. Existing CP algorithms, such as gradient-like approaches, handle symmetry and nonnegativity, but none of them uses a Jacobi-like procedure in spite of its good convergence properties. First, we rewrite the considered optimization problem as an unconstrained one; the nonnegativity constraint is ensured by means of a square change of variable. Second, we propose a Jacobi-like approach using LU matrix factorization, which consists in reformulating a high-dimensional optimization problem as several sequential rational subproblems. Numerical experiments emphasize the advantages of the proposed method, especially in the case of degeneracies such as bottlenecks.
Lu Wang
Inserm, UMR 642, Rennes, F-35000, FR
LTSI Laboratory, University of Rennes 1, Rennes, F-35000, FR
[email protected]

Laurent Albera
Inserm, UMR 642, Rennes, F-35000, FR
LTSI Laboratory, University of Rennes 1, Rennes, F-35000, FR
[email protected]

Huazhong Shu
Laboratory of Image Science and Technology, School of Computer Science and Engineering, Southeast University, Nanjing 210096, CN
Centre de Recherche en Information Biomedicale Sino-Francais (CRIBs), Nanjing 210096, CN
[email protected]

Lotfi Senhadji
Inserm, UMR 642, Rennes, F-35000, FR
LTSI Laboratory, University of Rennes 1, Rennes, F-35000, FR
Centre de Recherche en Information Biomedicale Sino-Francais (CRIBs), Rennes, F-35000, FR
[email protected]

Talk 2. Random matrices and tensor rank probabilities
By combining methods for tensors (multiway arrays) developed by ten Berge and recent results by Forrester and Mays on the number of real generalised eigenvalues of real random matrices, we show that the probability for a real random Gaussian n x n x 2 tensor to be of real rank n is Pn = (Γ((n+1)/2))^n / ((n-1)!(n-2)! ... 1!), where Γ is the gamma function; i.e., P2 = π/4, P3 = 1/2, P4 = 3^3 π^2 / 2^10, P5 = 1/9, etc. [ref: G. Bergqvist, Lin. Alg. Appl., to appear, 2012 (doi:10.1016/j.laa.2011.02.041); G. Bergqvist and P. J. Forrester, Elect. Comm. in Probab. 16, 630-637, 2011]. Previously such probabilities were only studied using numerical simulations. We also show that for large n, Pn = (e/4)^{n^2/4} n^{1/12} e^{-1/12 - ζ'(-1)} (1 + O(1/n)), where ζ is the Riemann zeta function.
Goran Bergqvist
Department of Mathematics
Linkoping University
[email protected]
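The closed-form values quoted for the formula Pn = Γ((n+1)/2)^n / ((n-1)!(n-2)! ... 1!) are easy to check numerically with the standard library (a small sketch; the function name is ours):

```python
import math

def rank_n_probability(n):
    """P_n = Gamma((n+1)/2)^n / ((n-1)! (n-2)! ... 1!), the probability that a
    real Gaussian n x n x 2 tensor has real rank n (Bergqvist's formula)."""
    num = math.gamma((n + 1) / 2) ** n
    den = 1.0
    for k in range(1, n):
        den *= math.factorial(k)
    return num / den

# spot-check the closed-form values listed in the abstract
assert math.isclose(rank_n_probability(2), math.pi / 4)
assert math.isclose(rank_n_probability(3), 0.5)
assert math.isclose(rank_n_probability(4), 27 * math.pi ** 2 / 1024)
```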

Talk 3. A new truncation strategy for the higher-order singular value decomposition of tensors
We present an alternative strategy to truncate the higher-order singular value decomposition (T-HOSVD) [De Lathauwer et al., SIMAX, 2000]. Our strategy is called the sequentially truncated HOSVD (ST-HOSVD). It requires fewer operations to compute and often improves the approximation error w.r.t. T-HOSVD. In one experiment we performed, the results of a numerical simulation of a partial differential equation were compressed by T-HOSVD and ST-HOSVD, yielding similar approximation errors. The execution time, on the other hand, was reduced from 2 hours 45 minutes for T-HOSVD to one minute for ST-HOSVD, representing a speedup of 133.
Nick Vannieuwenhoven
Department of Computer Science
KU Leuven
[email protected]

Raf Vandebril
Department of Computer Science
KU Leuven
[email protected]

Karl Meerbergen
Department of Computer Science
KU Leuven
[email protected]
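The idea of the sequential truncation can be sketched in NumPy (a minimal illustration, not the authors' implementation): after truncating each mode, the core shrinks, so every subsequent SVD acts on a smaller matrix; for a tensor of exact multilinear rank the result is still exact.

```python
import numpy as np

def mode_mult(T, W, mode):
    """Mode-`mode` product: multiply the mode-`mode` fibers of T by W."""
    M = W @ np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)
    new_shape = [W.shape[0]] + [s for i, s in enumerate(T.shape) if i != mode]
    return np.moveaxis(M.reshape(new_shape), 0, mode)

def st_hosvd(T, ranks):
    """Sequentially truncated HOSVD: truncate mode by mode, shrinking the core
    after each mode so that later SVDs are cheaper than in T-HOSVD."""
    core, factors = T, []
    for mode, r in enumerate(ranks):
        M = np.moveaxis(core, mode, 0).reshape(core.shape[mode], -1)
        U = np.linalg.svd(M, full_matrices=False)[0][:, :r]
        factors.append(U)
        core = mode_mult(core, U.T, mode)   # shrink this mode before the next SVD
    return core, factors

def reconstruct(core, factors):
    out = core
    for mode, U in enumerate(factors):
        out = mode_mult(out, U, mode)
    return out

rng = np.random.default_rng(3)
# build a tensor of exact multilinear rank (2,2,2): ST-HOSVD recovers it exactly
G = rng.standard_normal((2, 2, 2))
Us = [np.linalg.qr(rng.standard_normal((8, 2)))[0] for _ in range(3)]
T = reconstruct(G, Us)
core, factors = st_hosvd(T, (2, 2, 2))
assert np.allclose(reconstruct(core, factors), T)
```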

Talk 4. Probabilistic matrix approximation
This talk will present new results from the computer vision domain. We define a probabilistic framework in which the full range of an incomplete matrix is approximated by incorporating prior knowledge from similar matrices. Considering the low-rank matrix approximation inequality ||A - QQ^T A|| < ε, where the projection of A onto the subspace spanned by Q is used as an approximation, the proposed framework derives Q from only a submatrix A(:, J) (i.e., given only a few columns) by utilizing a prior distribution p(Q). Such an approach provides useful results in face recognition tasks when we deal with variations like illumination and facial expression.
Birkan Tunc
Informatics Institute
Istanbul Technical University, Turkey
[email protected]

Muhittin Gokmen
Dept. of Computer Engineering
Istanbul Technical University, Turkey
[email protected]

CP 26. Eigenvalue problems III

Talk 1. Eigenvalues of matrices with prescribed entries
An important motivation for our work is the description of the possible eigenvalues or the characteristic polynomial of a partitioned matrix of the form A = [A_{i,j}], over a field, where the blocks A_{i,j} are of type p_i x p_j (i, j ∈ {1, 2}), when some of the blocks A_{i,j} are prescribed and the others are unknown. Our aim is to describe the possible lists of eigenvalues of a partitioned matrix of the form C = [C_{i,j}] ∈ F^{n x n}, where F is an arbitrary field, n = p_1 + ... + p_k, the blocks C_{i,j} are of type p_i x p_j (i, j ∈ {1, ..., k}), and some of its blocks are prescribed while the others vary. In this talk we provide some solutions for the following situations:
(i) p_1 = ... = p_k;
(ii) a diagonal of blocks is prescribed;
(iii) k = 3.
Gloria Cravo
Centre for Linear Structures and Combinatorics
University of Lisbon, Portugal
Center for Exact Sciences and Engineering
University of Madeira, Portugal
[email protected]

Talk 2. Characterizing and bounding eigenvalues of interval matrices
In this talk we give a characterization of the eigenvalue set of an interval matrix and present some outer approximations of the eigenvalue set. Intervals represent measurement errors, estimates of unknown values, or other kinds of uncertainty. Taking into account all possible realizations of the interval values is necessary to obtain reliable bounds on eigenvalues. This approach helps in robust stability checking of dynamical systems, and in many other areas where eigenvalues of matrices with inexact entries are used.
Milan Hladík
Dept. of Applied Mathematics
Charles University in Prague
[email protected]

Talk 3. Lifted polytope methods for the computation of joint spectral characteristics of matrices
We describe new methods for computing the joint spectral radius and the joint spectral subradius of arbitrary sets of matrices. The methods build on two ideas that previously appeared in the literature: the iterative construction of polytope norms, and the lifting procedure. Moreover, the combination of these two ideas allows us to introduce a pruning algorithm which can significantly reduce the computational burden. We prove several appealing theoretical properties of our methods, and provide numerical examples of their good behaviour.
Raphael M. Jungers
FNRS and ICTEAM Institute
[email protected]

Antonio Cicone
Dipartimento di Matematica Pura ed Applicata
Universita degli Studi dell'Aquila
[email protected]

Nicola Guglielmi
Dipartimento di Matematica Pura ed Applicata
Universita degli Studi dell'Aquila
[email protected]

Talk 4. An improved dqds algorithm
In this talk we present a new deflation strategy and some modified shift strategies for the dqds algorithm. We call our implementation DLASQ2; it is faster and more accurate than the current LAPACK routine DLASQ. For some tough matrices for which DLASQ shows very slow convergence, DLASQ2 can be 2-6 times faster. For the matrices collected for testing DLASQ, and for random matrices, DLASQ2 also achieves more than a 15% speedup on average. More importantly, we prove that DLASQ2 has linear complexity.
Shengguo Li
Dept. of Mathematics
National University of Defense Technology, Changsha, China;
and University of California, Berkeley, USA
[email protected]
[email protected]

Ming Gu
Dept. of Mathematics
University of California, Berkeley, USA
[email protected]

Beresford Parlett
Dept. of Mathematics
University of California, Berkeley, USA
[email protected]

CP 27. Multigrid I

Talk 1. Algebraic multigrid for solution of discrete adjoint Reynolds-averaged Navier-Stokes (RANS) equations in compressible aerodynamics
In this work, the solution of the adjoint equations for second-order accurate unstructured finite volume discretizations of the RANS equations is investigated. For target applications, the corresponding linear systems are very large and ill-conditioned, and finding an efficient solver for them is a challenging task. Here, we suggest a solution approach based on algebraic multigrid (AMG), because AMG has potential for dealing with problems on unstructured grids. Combining AMG with defect correction helps to handle second-order accurate discretizations. The approach can be used in parallel, allowing the solution of problems involving several million grid points, which we demonstrate on a number of test cases.
Anna Naumovich
Institute of Aerodynamics and Flow Technology
German Aerospace Center
Braunschweig, Germany
[email protected]

Malte Forster
Fraunhofer Institute for Algorithms and Scientific Computing
Sankt Augustin, Germany
[email protected]

Talk 2. Symmetric multigrid theory for deflation methods
In this talk we present a new estimate for the speed of convergence of deflation methods, based on the idea of Nicolaides, for the iterative solution of linear systems of equations. This is done by using results from classical algebraic multigrid theory. As a further result we obtain that many prolongation operators from multigrid methods can be used to span the deflation subspace which is needed for deflation methods.
H. Rittich
Fachbereich Mathematik und Naturwissenschaften
Bergische Universitat Wuppertal
[email protected]

K. Kahl
Fachbereich Mathematik und Naturwissenschaften
Bergische Universitat Wuppertal
[email protected]

Talk 3. Aggregation-based multilevel methods for lattice QCD
In this talk, we present a multigrid solver for application in Quantum Chromodynamics (QCD), a theory that describes the strong interaction between subatomic particles. In QCD simulations a substantial amount of work is spent in solving Dirac equations on regular grids. These large sparse linear systems are often ill conditioned, and typical Krylov subspace methods (e.g. CGN, GCR, BiCGStab) tend to be slow. As a solution to their bad scaling behavior we present an aggregation-based multigrid method with a domain decomposition smoother and show numerical results for systems up to the size of 450 million unknowns.
Matthias Rottmann
Dept. of Mathematics
University of Wuppertal
[email protected]

Talk 4. Adaptive algebraic multigrid methods for Markov chains
We present an algebraic multigrid approach for computing the stationary distribution of an irreducible Markov chain. This method can be divided into two main parts, namely a multiplicative bootstrap algebraic multigrid (BAMG) setup phase and a Krylov subspace iteration preconditioned by the previously developed multigrid hierarchy. In our approach, we propose some modifications to the basic BAMG framework, e.g., using approximations to singular vectors as test vectors for the adaptive computation of the restriction and interpolation operators. Furthermore, new results concerning the convergence of the preconditioned Krylov subspace iteration for the resulting singular linear system will be shown.
Sonja Sokolovic
Dept. of Mathematics
University of Wuppertal
[email protected]

CP 28. Structured matrices II

Talk 1. Structured matrices and inverse problems for discrete Dirac systems with rectangular matrix potentials
Inverse problems for various classical and non-classical systems are closely related to structured operators and matrices. See, e.g., the treatment of such problems for discrete systems (where structured matrices appear) and additional references in A. Sakhnovich, Inverse Problems 22 (2006), 2083-2101, and a joint work of the authors: Operators and Matrices 2 (2008), 201-231. In this talk we consider an important case of discrete Dirac systems with rectangular matrix potentials and essentially develop in this way the results from the author's recent paper (see Inverse Problems 28 (2012), 015010) on continuous systems. The research was supported by the Austrian Science Fund (FWF) under Grant no. Y330 and the German Research Foundation (DFG) under grant no. KI 760/3-1.
Alexander Sakhnovich
Dept. of Mathematics
University of Vienna
[email protected]

Bernd Fritzsche
Dept. of Mathematics and Informatics
University of Leipzig
[email protected]

Bernd Kirstein
Dept. of Mathematics and Informatics
University of Leipzig
[email protected]

Inna Roitberg
Dept. of Mathematics and Informatics
University of Leipzig
[email protected]

Talk 2. Applications of companion matrices
Companion matrices are commonly used to estimate or compute polynomial zeros. We obtain new polynomial zero inclusion regions by considering appropriate polynomials of companion matrices, combined with similarity transformations. Our main tools are Gershgorin's theorem and Rouché's theorem: the former to show the way and the latter to prove the results. In addition, our techniques uncover geometric aspects of Pellet's and related theorems about the separation of zeros that were apparently not noticed before. Along the way, we encounter nontrivial root-finding problems that are currently solved with heuristic methods. We briefly show how they can be solved without heuristics.
Aaron Melman
Dept. of Applied Mathematics
Santa Clara University
[email protected]
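The basic mechanism behind such inclusion regions can be illustrated in a few lines (a textbook sketch, not Melman's construction): build the companion matrix of a monic polynomial, whose eigenvalues are the polynomial's zeros, and apply Gershgorin's theorem to get discs that must contain every zero.

```python
import numpy as np

def companion(coeffs):
    """Companion matrix of the monic polynomial x^n + c_{n-1} x^{n-1} + ... + c_0,
    given coeffs = [c_0, ..., c_{n-1}]; its eigenvalues are the polynomial zeros."""
    n = len(coeffs)
    C = np.zeros((n, n))
    C[1:, :-1] = np.eye(n - 1)           # subdiagonal of ones
    C[:, -1] = -np.asarray(coeffs)       # last column carries the coefficients
    return C

def gershgorin_discs(A):
    """(center, radius) pairs; the union of the discs contains every eigenvalue."""
    radii = np.sum(np.abs(A), axis=1) - np.abs(np.diag(A))
    return list(zip(np.diag(A).astype(complex), radii))

# zeros of p(x) = x^3 - 6x^2 + 11x - 6 are 1, 2, 3
C = companion([-6.0, 11.0, -6.0])
discs = gershgorin_discs(C)
for z in np.linalg.eigvals(C):
    assert any(abs(z - c) <= r + 1e-9 for c, r in discs)
```

Applying Gershgorin to a similarity transform D^{-1} C D with diagonal D, or to polynomials of C, tightens the discs; that refinement is where the new results of the talk come in.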

Talk 3. On factorization of structured matrices and GCD evaluation
The paper gives a self-contained survey of fast algorithms for the factorization of structured matrices. Algorithms of Sylvester and Hankel type are discussed. The Sylvester-based algorithms reduce the complexity of classical methods by one order, while the Hankel-based algorithm keeps the same complexity as the classical one, triangularizing the initial matrices in O(n^2) flops. Their connections with the evaluation of the GCD (greatest common divisor) of two polynomials are demonstrated. The presented procedures are studied and compared with respect to their error analysis and complexity. Numerical experiments performed with a wide variety of examples show the effectiveness of these algorithms in terms of speed, stability and robustness.
Skander Belhaj
University of Tunis El Manar
LAMSIN Laboratory, Tunisia
[email protected]

Dimitrios Triantafyllou
Hellenic Army Academy
[email protected]

Marilena Mitrouli
University of Athens
Department of Mathematics, Greece
[email protected]

Talk 4. An anti-triangular factorization of symmetric matrices
Indefinite symmetric matrices occur in many applications, such as optimization, least squares problems, partial differential equations and variational problems. In these applications one is often interested in computing a factorization that puts into evidence the inertia of the matrix or possibly provides an estimate of its eigenvalues. In this talk we present an algorithm that provides this information for any symmetric indefinite matrix by transforming it to a block anti-triangular form using orthogonal similarity transformations. We show that the algorithm is backward stable and that it has a complexity comparable to existing matrix decompositions for dense indefinite matrices.
Paul Van Dooren
Dept. of Mathematical Engineering
ICTEAM, Universite catholique de Louvain, Belgium
[email protected]

Nicola Mastronardi
Istituto per le Applicazioni del Calcolo "M. Picone"
CNR, Sede di Bari, Italy
[email protected]

CP 29. Miscellaneous IV

Talk 1. Structure-exploiting algorithm for solving palindromic quadratic eigenvalue problems
We study a palindromic quadratic eigenvalue problem (QEP) occurring in the vibration analysis of rail tracks under excitation arising from high-speed trains,

    (λ^2 A^T + λQ + A) z = 0,

where A, Q ∈ C^{n x n} and Q^T = Q. The coefficient matrices Q and A have special structure: the matrix Q is block tridiagonal and block Toeplitz, and the matrix A has only one nonzero q x q block in the upper-right corner, where n = mq. In using the solvent approach to solve the QEP, we fully exploit the special structures of the coefficient matrices to reduce the n x n matrix equation X + A^T X^{-1} A = Q to a q x q matrix equation of the same form. Numerical experiments are given to show that our method is more efficient and gives more accurate computed results than existing methods.
Linzhang Lu
School of Mathematical Science, Xiamen University
& School of Mathematics and Computer Science, Guizhou Normal University, China
[email protected], [email protected]

Fei Yuan
School of Mathematical Science, Xiamen University, PRC
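The matrix equation at the heart of the solvent approach, X + A^T X^{-1} A = Q, can be attacked by a basic fixed-point iteration (a generic sketch that converges when Q is sufficiently dominant; this is not the structured reduction of the talk, and all names are ours):

```python
import numpy as np

def solve_nme(A, Q, tol=1e-12, maxit=500):
    """Basic fixed-point iteration X <- Q - A^T X^{-1} A for the nonlinear
    matrix equation X + A^T X^{-1} A = Q, started from X = Q."""
    X = Q.copy()
    for _ in range(maxit):
        X_new = Q - A.T @ np.linalg.solve(X, A)
        if np.linalg.norm(X_new - X) <= tol * np.linalg.norm(X):
            return X_new
        X = X_new
    return X

rng = np.random.default_rng(4)
q = 3
A = 0.2 * rng.standard_normal((q, q))
Q = np.eye(q) + A.T @ A          # keeps every iterate >= I, so the map contracts
X = solve_nme(A, Q)
assert np.allclose(X + A.T @ np.linalg.solve(X, A), Q)
```

With Q = I + A^T A every iterate stays >= I, so the iteration map is a contraction with factor about ||A||^2; the point of the talk is that the block structure lets this n x n equation be replaced by a q x q one.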

Talk 2. A spectral multi-level approach for eigenvalue problems in first-principles materials science calculations
We present a multi-level approach in Fourier space to solve the Kohn-Sham equations from Density Functional Theory (DFT) using a plane wave basis and replacing the ionic cores by pseudopotentials. By increasing the cutoff energy and associated other parameters at each subsequent level, we demonstrate that this approach efficiently speeds up solving the Kohn-Sham equations. The method was implemented using the PARATEC first-principles plane wave code. Examples of multi-level calculations for bulk silicon, quantum dots and an aluminum surface are presented. In some cases, using the multi-level approach the total computation time is reduced by over 50%. The connection to multigrid approaches in real space is also discussed.
Andrew Canning
Lawrence Berkeley National Laboratory
Berkeley CA, USA
[email protected]

Mi Jiang
Dept. of Mathematics
University of California, Davis CA, USA
[email protected]

Talk 3. Spectrum of Sylvester operators on triangular spaces of matrices
We present recent results on the spectrum of the operator T given by T(X) = AX + XB defined on spaces of matrices with triangular shape, e.g., block upper triangular matrices and other related spaces. We also discuss the spectrum of the multiplication operator X ↦ AXB on the same spaces.
A.R. Sourour
Dept. of Mathematics
University of Victoria, Canada
[email protected]

Talk 4. Modulus-based successive overrelaxation method for pricing American options
Since the Chicago Board Options Exchange started to operate in 1973, the trading of options has grown to a tremendous scale and plays an important role in global economics. Various types of mathematical models for the prices of different kinds of options have been proposed during the last decades, and the valuation of options has been a topic of active research. Considering the Black-Scholes model for American options, a high order compact scheme with local mesh refinement is proposed and analyzed. Then, a modulus-based successive overrelaxation method is applied to the solution of the linear complementarity problems arising from the discrete Black-Scholes American option model. A sufficient condition for the convergence of the proposed methods is given. Numerical experiments further show that the high order compact scheme is efficient, and the modulus-based successive overrelaxation method is superior to the classical projected successive overrelaxation method.
Jun-Feng Yin
Dept. of Mathematics
Tongji University
[email protected]

Ning Zheng
Dept. of Mathematics
Tongji University
[email protected]
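A modulus-based SOR iteration for a linear complementarity problem (LCP) can be sketched as follows (in the spirit of Bai's modulus-based matrix splitting framework, with the common choices Ω = D and γ = 2; parameter names and the small test problem are ours, not the authors' pricing setup):

```python
import numpy as np

def modulus_sor_lcp(A, q, omega=1.0, gamma=2.0, tol=1e-12, maxit=1000):
    """Modulus-based SOR iteration for the LCP: find z >= 0 with
    A z + q >= 0 and z^T (A z + q) = 0.  Uses the SOR splitting
    A = (1/omega)(D - omega L) - (1/omega)((1-omega) D + omega U), Omega = D."""
    D = np.diag(np.diag(A))
    L = -np.tril(A, -1)
    U = -np.triu(A, 1)
    M = (D - omega * L) / omega
    N = ((1 - omega) * D + omega * U) / omega
    Omega = D
    x = np.zeros_like(q)
    for _ in range(maxit):
        rhs = N @ x + (Omega - A) @ np.abs(x) - gamma * q
        x_new = np.linalg.solve(Omega + M, rhs)
        if np.linalg.norm(x_new - x) <= tol:
            x = x_new
            break
        x = x_new
    # at a fixed point, z = (|x| + x)/gamma and w = A z + q are complementary
    return (np.abs(x) + x) / gamma

# tridiagonal M-matrix, the typical structure of discretized Black-Scholes LCPs
n = 8
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
rng = np.random.default_rng(5)
q = rng.standard_normal(n)
z = modulus_sor_lcp(A, q)
w = A @ z + q
assert (z >= -1e-8).all() and (w >= -1e-8).all()
assert abs(z @ w) < 1e-6
```

The equivalence works because a fixed point satisfies A(|x| + x) + γq = Ω(|x| − x), so z = (|x| + x)/γ and w = Ω(|x| − x)/γ are nonnegative with zᵀw = 0 for diagonal Ω.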

CP 30. Iterative methods II

Talk 1. On convergence of the MSOR-Newton method for nonsmooth equations
Motivated by [X. Chen, On the convergence of SOR methods for nonsmooth equations, Numer. Linear Algebra Appl. 9 (2002) 81-92], we further investigate a modified SOR-Newton (MSOR-Newton) method for solving a system of nonlinear equations F(x) = 0, where F is strongly monotone and locally Lipschitz continuous but not necessarily differentiable. The convergence interval of the parameter in the MSOR-Newton method is given. Compared with the SOR-Newton method, this interval can be larger and the algorithm can be more stable because of a larger denominator. Furthermore, numerical examples show that the MSOR-Newton method can converge faster than the corresponding SOR-Newton method when a suitable parameter is chosen.
Li Wang
School of Mathematical Sciences
Nanjing Normal University, P.R. China
[email protected]

Qingshen Li
School of Mathematical Sciences
Nanjing Normal University, P.R. China
[email protected]

Xiaoxia Zhou
School of Mathematical Sciences
Nanjing Normal University, P.R. China
[email protected]

Talk 2. A framework for deflated BiCG and related solvers
For solving ill-conditioned nonsymmetric linear algebraic systems, we introduce a general framework for applying augmentation and deflation to Krylov subspace methods based on a Petrov-Galerkin condition. In particular, the framework can be applied to the biconjugate gradient method (BiCG) and some of its generalizations, including BiCGStab and IDR(s). Our abstract approach does not depend on particular recurrences and thus simplifies the derivation of theoretical results. It easily leads to a variety of realizations by specific algorithms. We avoid algorithmic details, but we show that for every method there are two approaches for extending it by augmentation and deflation.
Martin H. Gutknecht
Seminar for Applied Mathematics
ETH Zurich, Switzerland
[email protected]

Andre Gaul
Institute of Mathematics
TU Berlin, Germany
[email protected]

Jorg Liesen
Institute of Mathematics
TU Berlin, Germany
[email protected]

Reinhard Nabben
Institute of Mathematics
TU Berlin, Germany
[email protected]

Talk 3. Prescribing the behavior of the GMRES method and the Arnoldi method simultaneously
In this talk we first show that the Ritz values generated in the subsequent iterations of the Arnoldi method can be fully independent of the eigenvalues of the input matrix. We will give a parametrization of the class of matrices and starting vectors generating prescribed Ritz values in all iterations. This result can be seen as an analogue of the parametrization that closed a series of papers by Arioli, Greenbaum, Ptak and Strakos (published in 1994, 1996 and 1998) on prescribing GMRES residual norm curves and spectra. In the second part of the talk we show, using the first part, that any GMRES convergence history is possible with any Ritz values in all iterations, provided we treat the GMRES stagnation case properly. We also present a parametrization of the class of matrices and right-hand sides generating prescribed Ritz values and GMRES residual norms in all iterations.
Jurjen Duintjer Tebbens
Dept. of Computational Methods
Institute of Computer Science
Academy of Sciences of the Czech Republic
[email protected]

Gerard Meurant
CEA/DIF
Commissariat a l'Energie Atomique, Paris (retired)
[email protected]

Talk 4. Efficient error bounds for linear systems and rational matrix functions
Does it make sense to run Lanczos on a (Hermitian) tridiagonal matrix? This talk will give the answer 'yes'. If T is the tridiagonal matrix arising from the standard Lanczos process for the matrix A and a starting vector v, then running Lanczos on the trailing (2m+1) x (2m+1) submatrix of T, with the (m+1)st unit vector as its start vector, yields information that allows us to devise lower and upper bounds on the error of the Lanczos approximations to solutions of linear systems A^{-1}b and, more generally, rational function evaluations r(A)b. The approach proceeds in a manner related to work of Golub and Meurant on the theory of moments and quadrature, and in particular allows us to obtain upper error bounds, provided a lower bound on the spectrum is known. The computational work is very low and independent of the dimension of the matrix A. We will present several numerical results, including linear systems and rational approximations to the exponential and the sign function.
Andreas Frommer
Dept. of Mathematics
Bergische Universitat Wuppertal
[email protected]

K. Kahl
Dept. of Mathematics
Bergische Universitat Wuppertal
[email protected]

Th. Lippert
Julich Supercomputing Centre
Julich Research Centre
[email protected]

H. Rittich
Dept. of Mathematics
Bergische Universitat Wuppertal
[email protected]

CP 31. Direct methods

Talk 1. On sparse threaded deterministic lock-free Cholesky and LDL^T factorizations
We consider the design and implementation of sparse threaded deterministic Cholesky and LDL^T factorizations using lock-free algorithms. The approach is based on a DAG representation of the factorization process and uses static scheduling mechanisms. Results show that the new solvers are comparable in quality to the existing nondeterministic ones, with a mean scalability degradation of about 15% over 150 instances from the University of Florida collection.
Alexander Andrianov
SAS Advanced Analytics Division
SAS Institute
[email protected]

Talk 2. A fast algorithm for constructing the solutionoperator for homogeneous elliptic boundary value problemsThe large sparse linear system arising from the finite element orfinite difference discretization of an elliptic PDE is typicallysolved with iterative methods such as GMRES or multigrid(often with the aid of a problem specific preconditioner). Analternative is solve the linear system directly via so-called nesteddissection or multifrontal methods. Such techniques reorder thediscretization nodes to reduce the asymptotic complexity ofGaussian elimination from O(N3) to O(N1.5) for twodimensional problems, where N is the number of discretizationpoints. By exploiting the structure in the dense matrices thatarise in the computations (using, e.g.,H-matrix arithmetic) thecomplexity can be further reduced to O(N). In this talk, we willdemonstrate that such accelerated nested dissection techniquesbecome particularly effective for homogeneous boundary valueproblems when the solution is sought on the boundary forseveral different sets of boundary data. In this case, a modifiedversion of the accelerated nested dissection scheme can executeany solve beyond the first in O(Nboundary) operations, whereNboundary denotes the number of points on the boundary.Typically, Nboundary ∼ N0.5. Numerical examples demonstratethe effectiveness of the procedure for a broad range of ellipticPDEs that includes both the Laplace and Helmholtz equations.Adrianna GillmanDept. of MathematicsDartmouth [email protected]

Per-Gunnar Martinsson
Dept. of Applied Mathematics
University of Colorado at Boulder
[email protected]

Talk 3. Eliminate last variable first!
When solving linear equations, textbooks typically start the elimination of variables from the top. We suggest, however, starting from the bottom; that is, we solve for the last variable from the last equation and eliminate it from all other equations, then do the same for the second-to-last equation, and so on. In particular, when solving the equilibrium equations arising from ergodic queueing systems, starting from the top can be proven to be unstable for large systems, whereas starting from the bottom is stable. Heuristic arguments show that starting from the bottom is also preferable in other cases.
Winfried Grassmann
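The bottom-up strategy can be sketched in a few lines (a minimal illustration of the idea, without pivoting; not the author's code): eliminate the last variable from all equations above it, proceed upward until the matrix is lower triangular, then forward-substitute.

```python
import numpy as np

def solve_bottom_up(A, b):
    """Solve Ax = b by eliminating the LAST variable first (no pivoting)."""
    A = np.array(A, dtype=float)
    b = np.array(b, dtype=float)
    n = len(b)
    for k in range(n - 1, 0, -1):       # eliminate x_k from the equations above
        for i in range(k):
            m = A[i, k] / A[k, k]
            A[i, :] -= m * A[k, :]
            b[i] -= m * b[k]
    x = np.zeros(n)                     # A is now lower triangular
    for i in range(n):                  # forward substitution
        x[i] = (b[i] - A[i, :i] @ x[:i]) / A[i, i]
    return x

A = np.array([[4.0, 1, 0], [1, 4, 1], [0, 1, 4]])
b = np.array([1.0, 2, 3])
x = solve_bottom_up(A, b)
assert np.allclose(A @ x, b)
```

Mathematically this is ordinary Gaussian elimination applied after reversing the ordering of equations and unknowns; the talk's point is that for ergodic queueing systems this ordering is the stable one.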


Dept. of Computer Science
University of Saskatchewan
[email protected]

Talk 4. Sharp estimates for the convergence rate of Orthomin(k) for a class of linear systems
In this work we show that the convergence rate of Orthomin(k) applied to systems of the form (I + ρU)x = b, where U is a unitary operator and 0 < ρ < 1, is less than or equal to ρ. Moreover, we give examples of operators U and ρ > 0 for which the asymptotic convergence rate of Orthomin(k) is exactly ρ, thus showing that the estimate is sharp. While the systems under scrutiny may not be of great interest in themselves, their existence shows that, in general, Orthomin(k) does not converge faster than Orthomin(1). Furthermore, we give examples of systems for which Orthomin(k) has the same asymptotic convergence rate as Orthomin(2) for k ≥ 2, but smaller than that of Orthomin(1). The latter systems are related to the numerical solution of certain partial differential equations.
Andrei Draganescu
Dept. of Mathematics and Statistics
University of Maryland, Baltimore County
Baltimore, MD
[email protected]
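The bound ρ is easy to observe numerically. A sketch of Orthomin(1) (the minimal-residual update x ← x + ωr with ω chosen to minimize the new residual) on a random instance of (I + ρU)x = b: since ω = 1 already gives r_new = −ρUr with norm ρ‖r‖, the optimal step can only do better, so every residual ratio is at most ρ. This is an illustration of the class of systems, not the authors' sharpness construction.

```python
import numpy as np

rng = np.random.default_rng(0)
n, rho = 60, 0.5
# random unitary U from the QR factorization of a complex Gaussian matrix
Q, _ = np.linalg.qr(rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n)))
A = np.eye(n) + rho * Q                       # A = I + rho * U
b = rng.standard_normal(n) + 0j

x = np.zeros(n, dtype=complex)
r = b - A @ x
for _ in range(30):
    Ar = A @ r
    omega = np.vdot(Ar, r) / np.vdot(Ar, Ar)  # Orthomin(1): minimize ||r - omega*A r||
    x = x + omega * r
    r_new = r - omega * Ar
    ratio = np.linalg.norm(r_new) / np.linalg.norm(r)
    assert ratio <= rho + 1e-10               # convergence rate at most rho
    r = r_new
```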

Florin Spinu
Campbell & Co.
Baltimore, MD
[email protected]

CP 32. Nonlinear methods

Talk 1. On the performance of the algebraic optimized Schwarz methods with applications
In this paper we investigate the performance of the algebraic optimized Schwarz methods. These methods are based on the modification of the transmission blocks, which are replaced by new blocks to improve the convergence of the corresponding algorithms. In the optimal case, convergence in two iterations can be achieved. We are interested in analyzing how the algebraic optimized Schwarz methods perform as preconditioners when solving differential equations. We are also interested in their asymptotic behavior with respect to changes in the problem's parameters. We present numerical simulations corresponding to different types of problems in two and three dimensions.
Lahcen Laayouni
School of Science and Engineering
Al Akhawayn University
Avenue Hassan II, 53000
P.O. Box 1630, Ifrane, Morocco
[email protected]

Daniel Szyld
Department of Mathematics, Temple University
Philadelphia, Pennsylvania 19122-6094, USA
[email protected]

Talk 2. Optimizing additive Runge-Kutta smoothers for unsteady flow problems
We consider the solution of unsteady flow problems using implicit time integration schemes. Typically the appearing nonlinear systems are solved using the FAS variant of multigrid, where the steady-state algorithm is reused without changes. Significant speedup can be obtained by redesigning the multigrid method for unsteady problems. In this talk, we discuss possibilities for finding optimal smoothers from the class of additive Runge-Kutta schemes, using the linear advection-diffusion equation as a model problem. In particular, options for the target function are discussed, such as the spectral radius of the smoother or the spectral radius of the multigrid iteration matrix.
Philipp Birken
Inst. of Mathematics
University of Kassel
[email protected]

Antony Jameson
Dept. of Aeronautics and Astronautics
Stanford University
[email protected]

Talk 3. On convergence conditions of waveform relaxation methods for linear differential-algebraic equations
Waveform relaxation methods are successful numerical methods for solving time-dependent differential equations that originate from the basic fixed-point idea in numerical linear algebra; they were first introduced for simulating the behavior of very large electrical networks. For linear constant-coefficient differential-algebraic equations, we study waveform relaxation methods on the infinite time interval without demanding boundedness of the solutions. In particular, we derive explicit expressions and obtain asymptotic convergence rates for this class of iteration schemes under weaker assumptions, which may give them a wider and more useful range of application. Numerical simulations demonstrate the validity of the theory.
X. Yang
Dept. of Mathematics
Nanjing University of Aeronautics and Astronautics
No. 29, Yudao Street, Nanjing 210016, China
[email protected]

Z.-Z. Bai
State Key Laboratory of Scientific/Engineering Computing
Institute of Computational Mathematics and Scientific/Engineering Computing
Academy of Mathematics and Systems Science
Chinese Academy of Sciences
P.O. Box 2719, Beijing 100190, China
[email protected]

Talk 4. On sinc discretization and banded preconditioning for linear third-order ordinary differential equations
Some draining or coating fluid-flow problems, and problems concerning the flow of thin films of viscous fluid with a free surface, can be described by third-order ordinary differential equations. In this talk, we solve the boundary value problems of such equations by sinc discretization and prove that the discrete solutions converge to the true solutions of the ordinary differential equations exponentially. The discrete solution is determined by a linear system whose coefficient matrix is a combination of Toeplitz and diagonal matrices. The system can be effectively solved by Krylov subspace iteration methods such as GMRES preconditioned by banded matrices. We demonstrate that the eigenvalues of the preconditioned matrix are uniformly bounded within a rectangle in the complex plane, independent of the size of the linear system. Numerical examples are given to illustrate the effective performance of our method.
Zhi-Ru Ren
State Key Laboratory of Scientific/Engineering Computing
Institute of Computational Mathematics and Scientific/Engineering Computing
Academy of Mathematics and Systems Science, Chinese Academy of Sciences
P.O. Box 2719, Beijing 100190, P.R. China
[email protected]

Zhong-Zhi Bai


State Key Laboratory of Scientific/Engineering Computing
Institute of Computational Mathematics and Scientific/Engineering Computing
Academy of Mathematics and Systems Science, Chinese Academy of Sciences
P.O. Box 2719, Beijing 100190, P.R. China
[email protected]

Raymond H. Chan
Department of Mathematics
The Chinese University of Hong Kong, Shatin, Hong Kong
Email: [email protected]

CP 33. Matrices and graphs

Talk 1. Complex networks metrics for software systems
The study of large software systems has recently benefited from the application of complex graph theory. In fact, it is possible to describe a software system by a complex network, where nodes and edges represent software modules and the relationships between them, respectively. In our case, the goal is to study the occurrence of bugs during software development and to relate them to some metrics, either old or recently introduced, used to characterize the nodes of a graph. This will be done through the application of numerical linear algebra techniques to the adjacency matrix associated with the graph.
Caterina Fenu
Department of Mathematics and Computer Science
University of Cagliari
[email protected]
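One standard node metric computed from the adjacency matrix by numerical linear algebra is subgraph centrality, the diagonal of the matrix exponential exp(A), which weights closed walks of length k by 1/k!. This is a generic illustration of the linear-algebra approach on a toy dependency graph, not the specific metrics of the talk.

```python
import numpy as np
from scipy.linalg import expm

# toy "module dependency" graph (undirected): edges 0-1, 1-2, 2-0, 2-3
A = np.zeros((4, 4))
for i, j in [(0, 1), (1, 2), (2, 0), (2, 3)]:
    A[i, j] = A[j, i] = 1.0

centrality = np.diag(expm(A))       # subgraph centrality: weighted closed walks
ranking = np.argsort(-centrality)   # node 2, with the most connections, ranks first
```

In practice large adjacency matrices are sparse, and diag(exp(A)) is estimated by Lanczos/quadrature techniques rather than a dense expm.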

Michele Marchesi
Department of Electrical and Electronic Engineering
University of Cagliari
[email protected]

Giuseppe Rodriguez
Department of Mathematics and Computer Science
University of Cagliari
[email protected]

Roberto Tonelli
Department of Electrical and Electronic Engineering
University of Cagliari
[email protected]

Talk 2. On Euclidean distance matrices of graphs
A matrix D ∈ R^{n×n} is a Euclidean distance matrix (EDM) if there exist points x_i ∈ R^r, i = 1, 2, ..., n, such that d_ij = ||x_i − x_j||^2. Euclidean distance matrices have many interesting properties and are used in various applications in linear algebra, graph theory, bioinformatics, etc., where frequently the question arises of what can be said about a configuration of the points x_i if only the distances between them are known.
In this talk some results on Euclidean distance matrices arising from distances in graphs will be presented. In particular, the distance spectrum of such matrices will be analysed for some families of graphs, and it will be proven that their distance matrices are EDMs. A generalization to distance matrices of weighted graphs will also be tackled.
Jolanda Modic
FMF, University of Ljubljana, Slovenia
XLAB d.o.o., Ljubljana, Slovenia
[email protected]
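A classical way to verify the EDM property numerically is Schoenberg's criterion (standard background, not a result of the talk): a matrix D of squared pairwise distances is an EDM exactly when −J D J / 2 is positive semidefinite, where J = I − (1/n)·ones is the centering matrix.

```python
import numpy as np

rng = np.random.default_rng(1)
n, r = 6, 3
X = rng.standard_normal((n, r))                 # points x_i in R^r
G = X @ X.T                                     # Gram matrix
# D_ij = ||x_i - x_j||^2 built from the Gram matrix
D = np.diag(G)[:, None] + np.diag(G)[None, :] - 2 * G

J = np.eye(n) - np.ones((n, n)) / n             # centering matrix
eigs = np.linalg.eigvalsh(-J @ D @ J / 2)
assert eigs.min() > -1e-10                      # PSD, so D is an EDM
```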

Gasper Jaklic
FMF and IMFM, University of Ljubljana, Slovenia
IAM, University of Primorska, Slovenia
[email protected]

Talk 3. Evaluating matrix functions by resummations ongraphs: the method of path-sums

We introduce the method of path-sums, a tool for analytically evaluating a function of a square matrix, based on the closed-form resummation of infinite families of terms in the corresponding Taylor series. For finite matrices, our approach yields the exact result in a finite number of steps. We achieve this by combining a mapping between matrix powers and walks on a weighted graph with a universal result on the structure of such walks. This result reduces a sum over weighted walks to a sum over weighted paths, a path being forbidden to visit any vertex more than once.
Pierre-Louis Giscard
Dept. of Physics
University of Oxford
[email protected]

Simon Thwaite
Dept. of Physics
University of Oxford
[email protected]

Dieter Jaksch
Dept. of Physics, University of Oxford
Centre for Quantum Technologies, National University of Singapore
[email protected]

Talk 4. An estimation of general interdependence in an open linear structure
Open linear structures have many applications in the sciences; the social sciences, including economics, have made extensive use of them.
Starting from the one-to-one correspondence between a square matrix and a graph, this paper aims to establish several theorems linking components, sub-structures, loops and circuits of the graph with certain characteristics of the exchange matrix. That correspondence focuses particularly on the determinant of the matrix and its sub-matrices. Hence, the determinant is a specific function of the possible arrangements in the graph (open, closed, linear, triangular, circular, autarkic, etc.).
A matrix appears as an orderly and intelligible articulation of sub-structures, themselves divisible down to elementary coefficients. From this follows a possible measure of general interdependence between the elements of a structure. Multiple uses can be deduced (inter-industry trade, international trade, strategic positioning, pure economic theory, etc.).
Roland Lantner
Centre d'Economie de la Sorbonne, CNRS UMR 8174
Universite Paris 1 Pantheon-Sorbonne and ENSTA ParisTech
106-112, boulevard de l'Hopital
F-75013 Paris
[email protected]

CP 34. PageRank

Talk 1. On the complexity of optimizing PageRank
We consider the PageRank optimization problem, in which one seeks to maximize (or minimize) the PageRank of a node in a graph by adding or deleting links from a given subset. The problem is essentially an eigenvalue maximization problem and has recently received much attention. It can be modeled as a Markov decision process. We provide provably efficient methods to solve the problem on large graphs for a number of cases of practical importance, and we show using perturbation analysis that for a close variation of the problem, the same


techniques have exponential worst-case complexity.
Romain Hollanders
Department of Mathematical Engineering, ICTEAM institute, UCL
[email protected]
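For context, the quantity being optimized is the stationary vector of the Google matrix; a minimal power-iteration sketch (background only, not the talk's optimization method) makes the object concrete:

```python
import numpy as np

links = {0: [1, 2], 1: [2], 2: [0], 3: [2]}   # toy web graph: node -> out-links
n, alpha = 4, 0.85
P = np.zeros((n, n))
for i, outs in links.items():
    P[i, outs] = 1.0 / len(outs)              # row-stochastic link matrix

pi = np.full(n, 1.0 / n)
for _ in range(100):
    pi = alpha * pi @ P + (1 - alpha) / n     # power iteration on the Google matrix
assert abs(pi.sum() - 1) < 1e-8               # pi is the PageRank distribution
```

Adding or deleting rows' out-links changes P, and hence pi, nonlinearly; that is what makes the optimization problem nontrivial.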

Jean-Charles Delvenne
Department of Mathematical Engineering, ICTEAM institute, UCL
[email protected]

Raphael M. Jungers
FNRS and Department of Mathematical Engineering, ICTEAM institute, UCL
[email protected]

Talk 2. Optimization of the HOTS score of a website's pages
Tomlin's HOTS algorithm is one of the methods allowing search engines to rank web pages, taking into account the web graph structure. It relies on a scalable iterative scheme computing the dual solution (the HOTS score) of a nonlinear network flow problem. We study here the convergence properties of Tomlin's algorithm as well as of some of its variants. Then, we address the problem of optimizing the HOTS score of a web page (or site), given a set of controlled hyperlinks. We give a scalable algorithm based on a low-rank property of the matrix of partial derivatives of the objective function and report numerical results on a fragment of the web graph.
Olivier Fercoq
INRIA Saclay and CMAP, Ecole Polytechnique
[email protected]

Stephane Gaubert
INRIA Saclay and CMAP, Ecole Polytechnique
[email protected]

Talk 3. An inclusion set for the personalized PageRank
The personalized PageRank (PPR) was one of the first modifications introduced to the original formulation of the PageRank algorithm. PPR is based on a probability distribution vector that biases the PageRank towards some nodes, and it can be used as a centrality measure in complex networks. In this talk, we give some theoretical results for the PPR on a directed graph with dangling nodes (nodes with zero out-degree). Our results, derived by using matrix analysis, lead to an inclusion set for the entries of the PPR. These bounds, for a given distribution for the dangling nodes, are independent of the personalization vector. We use these results to give a theoretical justification of a recent model that uses the PPR to classify users of social network sites, and we give examples of how to use these results in some networks.
Francisco Pedroche
Institut de Matematica Multidisciplinaria
Universitat Politecnica de Valencia, Spain
[email protected]
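The two ingredients of the talk, a personalization vector v and a distribution w for redistributing the mass of dangling nodes, can be sketched in a small power iteration (an illustrative setup, not the talk's inclusion bounds):

```python
import numpy as np

n, alpha = 4, 0.85
P = np.zeros((n, n))
P[0, [1, 2]] = 0.5
P[1, 2] = 1.0
P[2, 0] = 1.0                        # node 3 is dangling: its row stays zero
w = np.full(n, 1.0 / n)              # distribution for dangling-node mass
v = np.array([1.0, 0.0, 0.0, 0.0])   # personalization vector biased to node 0

pi = v.copy()
for _ in range(200):
    dangling_mass = pi[3]            # probability currently on the dangling node
    pi = alpha * (pi @ P + dangling_mass * w) + (1 - alpha) * v

assert abs(pi.sum() - 1.0) < 1e-8
assert pi[0] == pi.max()             # PPR concentrates near the personalized node
```

The talk's bounds constrain where the entries of pi can lie for any choice of v, once w is fixed.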

Regino Criado
Departamento de Matematica Aplicada
Universidad Rey Juan Carlos, Spain
[email protected]

Esther Garcia
Departamento de Matematica Aplicada
Universidad Rey Juan Carlos, Spain
[email protected]

Miguel Romance
Departamento de Matematica Aplicada
Universidad Rey Juan Carlos, Spain
[email protected]

CP 35. Matrix equations

Talk 1. Upper bounds on the solution of the continuous algebraic Riccati matrix equation
In this paper, by considering the equivalent form of the continuous algebraic Riccati matrix equation and using matrix properties and inequalities, we propose new upper matrix bounds for the solution of the continuous algebraic Riccati matrix equation. We then give numerical examples to show the effectiveness of our results.
Zubeyde Ulukok
Dept. of Mathematics, Science Faculty
Selcuk University
[email protected]

Ramazan Turkmen
Dept. of Mathematics, Science Faculty
Selcuk University
[email protected]

Talk 2. A large-scale nonsymmetric algebraic Riccati equation from transport theory
We consider the solution of the large-scale nonsymmetric algebraic Riccati equation XCX − XD − AX + B = 0 from transport theory (Juang 1995), with M ≡ [D, −C; −B, A] ∈ R^{2n×2n} being a nonsingular M-matrix. In addition, A and D are rank-1 corrections of diagonal matrices, with the products A^{-1}u, A^{-T}u, D^{-1}v and D^{-T}v computable in O(n) complexity for some vectors u and v, and B, C are rank 1. The structure-preserving doubling algorithm of Guo, Lin and Xu (2006) is adapted, with appropriate applications of the Sherman-Morrison-Woodbury formula and sparse-plus-low-rank representations of the various iterates. The resulting large-scale doubling algorithm has O(n) computational complexity and memory requirement per iteration and converges essentially quadratically, as illustrated by the numerical examples.
Hung-Yuan Fan
Department of Mathematics
National Taiwan Normal University, Taipei 116, Taiwan
[email protected]

Peter Chang-Yi Weng
School of Mathematical Sciences
Building 28, Monash University 3800, Australia
[email protected]

Eric King-wah Chu
School of Mathematical Sciences
Building 28, Monash University 3800, Australia
[email protected]

Talk 3. A stable variant of the biconjugate A-orthogonal residual method for non-Hermitian linear systems
We describe two novel iterative Krylov methods, the biconjugate A-orthogonal residual (BiCOR) and the conjugate A-orthogonal residual squared (CORS) methods, developed from variants of the nonsymmetric Lanczos algorithm. We discuss both theoretical and computational aspects of the two methods. Finally, we present an algorithmic variant of the BiCOR method which exploits the composite step strategy employed in the development of the composite step BCG method to cure one of the breakdowns, known as pivot breakdown. The resulting variant computes the BiCOR iterates stably under the assumption that the underlying biconjugate A-orthonormalization procedure does not break down.
Bruno Carpentieri
Institute of Mathematics and Computing Science
University of Groningen, Groningen, The Netherlands


[email protected]

Yan-Fei Jing
School of Mathematical Sciences / Institute of Computational Science
University of Electronic Science and Technology of China, Chengdu, China
INRIA Bordeaux Sud-Ouest
Universite de Bordeaux, Talence, France
[email protected] or [email protected]

Ting-Zhu Huang
School of Mathematical Sciences / Institute of Computational Science
University of Electronic Science and Technology of China, Chengdu, China
[email protected]

Yong Duan
School of Mathematical Sciences / Institute of Computational Science
University of Electronic Science and Technology of China, Chengdu, China
[email protected]

Talk 4. On Hermitian and skew-Hermitian splitting iteration methods for the equation AXB = C
In this talk we present new results on an iteration method for solving the linear matrix equation AXB = C. This method is formed by extending the corresponding HSS iteration methods for solving Ax = b. The analysis shows that the HSS iteration method converges under certain assumptions. Moreover, the optimal parameter of this iteration method is presented in the latter part of the paper. Numerical results show that the new method is more efficient and robust than the existing methods.
Xiang Wang
Dept. of Mathematics
University of Nanchang
[email protected]
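For readers unfamiliar with the underlying scheme being extended: the classical HSS iteration for Ax = b (Bai, Golub and Ng) alternates two shifted solves with the Hermitian part H = (A + A*)/2 and the skew-Hermitian part S = (A − A*)/2. A minimal dense sketch (background for the talk, not its AXB = C method):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 20
A = 10 * np.eye(n) + rng.standard_normal((n, n))   # non-Hermitian, H part pos. def.
b = rng.standard_normal(n)

H = (A + A.T) / 2                                  # Hermitian part
S = (A - A.T) / 2                                  # skew-Hermitian part
lam = np.linalg.eigvalsh(H)
alpha = np.sqrt(lam[0] * lam[-1])                  # classical near-optimal parameter
I = np.eye(n)

x = np.zeros(n)
for _ in range(50):                                # two half-steps per HSS iteration
    x_half = np.linalg.solve(alpha * I + H, (alpha * I - S) @ x + b)
    x = np.linalg.solve(alpha * I + S, (alpha * I - H) @ x_half + b)
assert np.linalg.norm(A @ x - b) < 1e-8 * np.linalg.norm(b)
```

The talk carries the same splitting and parameter-choice questions over to the matrix equation AXB = C.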

CP 36. Positivity II

Talk 1. A note on B-matrices and doubly B-matrices
A matrix with positive row sums and all its off-diagonal elements bounded above by their corresponding row means was called a B-matrix in Pena, J.M., "A class of P-matrices with applications to the localization of the eigenvalues of a real matrix", SIAM J. Matrix Anal. Appl. 22, 1027-1037 (2001). In Pena, J.M., "On an alternative to Gerschgorin circles and ovals of Cassini", Numer. Math. 95, 337-345 (2003), the class of doubly B-matrices was introduced as a generalization of the former. In this talk we present several characterizations and properties of these matrices, and for each of these classes we consider the corresponding questions for subdirect sums of two matrices (a general 'sum' of matrices introduced by S.M. Fallat and C.R. Johnson, of which the direct sum and ordinary sum are special cases).
C. Mendes Araujo
Centro de Matematica
Universidade do Minho
[email protected]
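The defining conditions translate directly into a check (a sketch of the definition only: positive row sums, and every off-diagonal entry strictly below its row mean):

```python
import numpy as np

def is_B_matrix(A):
    """Check the B-matrix definition of Pena (2001)."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    row_sums = A.sum(axis=1)
    if not np.all(row_sums > 0):            # positive row sums
        return False
    means = row_sums / n
    for i in range(n):
        for j in range(n):
            if i != j and A[i, j] >= means[i]:   # off-diagonal < row mean
                return False
    return True

assert is_B_matrix([[2.0, -1.0], [0.0, 3.0]])
assert not is_B_matrix([[1.0, 2.0], [0.0, 1.0]])   # entry 2 exceeds row mean 1.5
```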

Juan R. Torregrosa
Depto. de Matematica Aplicada
Universidad Politecnica de Valencia
[email protected]

Talk 2. Accurate computations for rational Bernstein-Vandermonde and Said-Ball-Vandermonde matrices
Gasca and Pena showed that nonsingular totally nonnegative (TNN) matrices and their inverses admit a bidiagonal decomposition. Koev, in a recent work, assuming that the

bidiagonal decompositions of a TNN matrix and its inverse are known with high relative accuracy (HRA), presented algorithms for performing some algebraic computations with high relative accuracy: computing the eigenvalues and the singular values of the TNN matrix, computing the inverse of the TNN matrix, and obtaining the solution of some linear systems whose coefficient matrix is the TNN matrix.
Rational Bernstein and Said-Ball bases are common representations in Computer Aided Geometric Design (CAGD). Solving some of the algebraic problems mentioned above for the collocation matrices of those bases (RBV and RSBV matrices, respectively) is important for some problems arising in CAGD. In our talk we will show how to compute the bidiagonal decomposition of the RBV and RSBV matrices with HRA. Then we will apply Koev's algorithms, showing the accuracy of the obtained results for the considered algebraic problems.
Jorge Delgado
Dept. of Applied Mathematics
Universidad de Zaragoza
[email protected]

Juan Manuel Pena
Dept. of Applied Mathematics
Universidad de Zaragoza
[email protected]

Talk 3. On properties of combined matrices
The combined matrix of a nonsingular matrix A is the Hadamard (entrywise) product A ∘ (A^{-1})^T. It is well known that all row (and column) sums of combined matrices are constant and equal to one. Recently, some results on combined matrices of various classes of matrices have been obtained in LAA 430 (2009) and LAA 435 (2011). In this work, we analyze similar properties (characterizations, positiveness, spectrum) when the matrix A belongs to certain classes of matrices.
Isabel Gimenez
Institut de Matematica Multidisciplinar
Universitat Politecnica de Valencia
[email protected]
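The constant row and column sums follow from (A A^{-1})_ii = (A^{-1} A)_jj = 1 and are easy to confirm numerically (a quick sketch, independent of the talk's new results):

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((5, 5)) + 5 * np.eye(5)   # generically nonsingular
C = A * np.linalg.inv(A).T                        # combined matrix: A o (A^{-1})^T

assert np.allclose(C.sum(axis=1), 1)              # every row sums to 1
assert np.allclose(C.sum(axis=0), 1)              # every column sums to 1
```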

Maria T. Gasso
Institut de Matematica Multidisciplinar
Universitat Politecnica de Valencia
[email protected]

Talk 4. Computing the Jordan blocks of irreducible totally nonnegative matrices
In 2005 Fallat and Gekhtman fully characterized the Jordan canonical form of the irreducible totally nonnegative matrices. In particular, all nonzero eigenvalues are simple, and the possible Jordan structures of the zero eigenvalues are well understood and described. Starting with the bidiagonal decomposition of these matrices, we present an algorithm for computing all the eigenvalues, including the Jordan blocks, to high relative accuracy, in what we believe is the first example of Jordan structure being computed accurately in the presence of roundoff errors.
Plamen Koev
Department of Mathematics
San Jose State University
[email protected]

CP 37. Matrix computations

Talk 1. Computation of the matrix pth root and its Frechet derivative by integrals
We present new integral representations for the matrix pth root


and its Frechet derivative, and then investigate the computation of these functions by numerical quadrature. Three different quadrature rules are considered: composite trapezoidal, Gauss-Legendre and adaptive Simpson. The problem of computing the matrix pth root times a vector without the explicit evaluation of the pth root is also analyzed, and bounds for the norm of the matrix pth root and its Frechet derivative are derived.
Joao R. Cardoso
Dept. of Physics and Mathematics
Coimbra Institute of Engineering
[email protected]

Talk 2. An algorithm for the exact Fisher information matrix of vector ARMAX time series processes
In this paper an algorithm is developed for the exact Fisher information matrix of a vector ARMAX Gaussian process (VARMAX). The algorithm is composed of recursion equations at a vector-matrix level, and some of these recursions consist of derivatives, to which appropriate differential rules are applied. The derivatives are derived from a state space model for a vector process. The chosen representation is such that the recursions are given in terms of expectations of derivatives of innovations rather than of the process and observation disturbances. This enables us to produce an implementable algorithm for the VARMAX process. The algorithm is illustrated by an example.
Andre Klein
Dept. of Quantitative Economics
University of Amsterdam, The Netherlands
[email protected]

Guy Melard
ECARES CP 114/4
Universite libre de Bruxelles, Belgium
[email protected]

Talk 3. An algorithm to compute the matrix logarithm and its Frechet derivative for use in condition number estimation
Recently there has been a surge of interest in the matrix logarithm from within the finance, control and machine learning sectors. We build on work by Al-Mohy and Higham to give an algorithm for computing the matrix logarithm and its Frechet derivative simultaneously. We will show that the new algorithm is significantly more efficient than existing alternatives and explain how it can be used to estimate the condition number of the logarithm. We also derive a version of the algorithm that works entirely in real arithmetic where appropriate.
Samuel Relton
School of Mathematics
University of Manchester, UK
[email protected]
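A standard (if expensive) reference way to obtain the Frechet derivative L_log(A, E), useful for checking a faster algorithm like the talk's, is the 2x2 block identity f([[A, E], [0, A]]) = [[f(A), L_f(A,E)], [0, f(A)]]. This sketch is not the authors' method, only a baseline:

```python
import numpy as np
from scipy.linalg import logm

rng = np.random.default_rng(4)
n = 4
A = np.eye(n) + 0.1 * rng.standard_normal((n, n))   # safely away from the branch cut
E = rng.standard_normal((n, n))                     # direction of the derivative

M = np.block([[A, E], [np.zeros((n, n)), A]])
L = logm(M)[:n, n:]                                 # Frechet derivative L_log(A, E)

# finite-difference sanity check
h = 1e-6
fd = (logm(A + h * E) - logm(A)) / h
assert np.linalg.norm(L - fd) / np.linalg.norm(L) < 1e-3
```

The condition number of the logarithm is governed by the norm of the map E -> L_log(A, E), which is what such derivative evaluations feed into.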

Awad Al-Mohy
Dept. of Mathematics
King Khalid University, Saudi Arabia
[email protected]

Nick Higham
School of Mathematics
University of Manchester, UK
[email protected]

Talk 4. High-order iterative methods for the matrix pth root
The main goal of this paper is to approximate the principal pth root of a matrix by high-order iterative methods. We analyse the semilocal convergence and the speed of convergence of these methods. Concerning stability, it is well known that even the simplified Newton iteration is unstable. Despite this, we are able to

present stable versions of our algorithms. Finally, we test the methods numerically. We check the numerical robustness and stability of the methods by considering matrices that are close to singular and badly conditioned. We find algorithms in the family with better numerical behavior than both the Newton and Halley methods, which are essentially the iterative methods proposed in the literature to solve this problem.
Sergio Amat
Dept. of Applied Mathematics and Statistics
U.P. Cartagena
[email protected]

J.A. Ezquerro
Department of Mathematics and Computation
University of La Rioja
[email protected]

M.A. Hernandez
Department of Mathematics and Computation
University of La Rioja
[email protected]

CP 38. Eigenvalue problems IV

Talk 1. An efficient way to compute the eigenvalues in a specific region of the complex plane
Spectral projectors are efficient tools for extracting spectral information from a given matrix or matrix pair. On the other hand, these projectors have huge computational costs due to the matrix inversions needed by most computational methods. Gaussian quadratures can be combined with sparse approximate inversion techniques to produce accurate, sparsity-preserving spectral projectors for the computation of the needed spectral information. In this talk we will show how one can compute spectral projectors efficiently to find the eigenvalues in a specific region of the complex plane by using the proposed computational approach.
E. Fatih Yetkin
Informatics Institute
Istanbul Technical University
[email protected]
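The object being approximated is the contour-integral spectral projector P = (1/2πi) ∮ (zI − A)^{-1} dz. A dense sketch with trapezoidal quadrature on a circle (the talk's contribution is doing the resolvent solves sparsely and approximately; this only shows the projector itself):

```python
import numpy as np

rng = np.random.default_rng(5)
A = np.diag([0.1, 0.2, 0.3, 5.0, 6.0]) + 0.01 * rng.standard_normal((5, 5))

center, radius, m = 0.2, 1.0, 32          # circle enclosing the 3 small eigenvalues
P = np.zeros((5, 5), dtype=complex)
for k in range(m):
    w = np.exp(2j * np.pi * k / m)        # quadrature node on the unit circle
    z = center + radius * w
    P += radius * w * np.linalg.inv(z * np.eye(5) - A)
P /= m                                    # trapezoid rule for (1/2 pi i) * contour integral

assert abs(np.trace(P).real - 3) < 1e-8   # trace counts eigenvalues inside
assert np.linalg.norm(P @ P - P) < 1e-6   # P is (numerically) a projector
```

The range of P spans the invariant subspace of the enclosed eigenvalues, from which those eigenvalues can be extracted by a small projected eigenproblem.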

Hasan Dag
Information Technologies Department
Kadir Has University
[email protected]

Murat Manguoglu
Computer Engineering Department
Middle East Technical University
[email protected]

Talk 2. A divide, reduce and conquer algorithm for matrix diagonalization in computer simulators
We present a new parallel algorithm for efficiently finding a subset of eigenpairs of large Hermitian matrices, such as Hamiltonians used in the field of electron transport calculations. The proposed algorithm uses a Divide, Reduce and Conquer (DRC) method to decrease computational time by keeping only the important degrees of freedom, without loss of accuracy for the desired spectrum, using a black-box diagonalization subroutine (LAPACK, ScaLAPACK). Benchmarking results against the diagonalization algorithms of LAPACK/ScaLAPACK will be presented.
Marios Iakovidis
Tyndall National Institute
Dept. of Electrical and Electronic Engineering
University College Cork
[email protected]


Giorgos Fagas
Tyndall National Institute
University College Cork
[email protected]

Talk 3. A rational Krylov method based on Newton and/or Hermite interpolation for the nonlinear eigenvalue problem
In this talk we present a new rational Krylov method for solving the nonlinear eigenvalue problem (NLEP):

A(λ)x = 0.

The method approximates A(λ) by polynomial Newton and/or Hermite interpolation. It uses a companion-type reformulation to obtain a linear generalized eigenvalue problem (GEP). This GEP is solved by a rational Krylov method, where the number of interpolation points is not fixed in advance but chosen dynamically; as a result, the companion form grows in each iteration. Each iteration requires a linear system solve with A(σ), where σ is the last interpolation point. We illustrate the method by numerical examples and compare with residual inverse iteration. We also give a number of scenarios where the method performs very well.
Roel Van Beeumen
Dept. of Computer Science
KU Leuven, University of Leuven, Belgium
[email protected]

Karl Meerbergen
Dept. of Computer Science
KU Leuven, University of Leuven, Belgium
[email protected]

Wim Michiels
Dept. of Computer Science
KU Leuven, University of Leuven, Belgium
[email protected]

Talk 4. The rotation of eigenspaces of perturbed matrix pairs
We present new sin Θ theorems for relative perturbations of the Hermitian definite generalized eigenvalue problem A − λB, where both A and B are Hermitian and B is positive definite. The rotation of eigenspaces is measured in the matrix-dependent scalar product. We assess the sharpness of the new estimates in terms of effectivity quotients (the quotient of the measure of the perturbation with the estimator). The known sin Θ theorems for relative perturbations of the single-matrix Hermitian eigenspace problem are included as special cases in our approach. We also present an upper bound for the norm of a J-unitary matrix F (F* J F = J), which plays an important role in the relative perturbation theory for quasi-definite Hermitian matrices H, where H_qd ≡ P^T H P = [H_11, H_12; H_12*, −H_22] and J = diag(I_k, −I_{n−k}) for some permutation matrix P, with H_11 ∈ C^{k×k} and H_22 + H_12* H_11^{-1} H_12 ∈ C^{(n−k)×(n−k)} positive definite.
Ninoslav Truhar
Department of Mathematics
University J.J. Strossmayer
[email protected]

Luka Grubisic
Department of Mathematics
University of Zagreb
[email protected]

Suzana Miodragovic
Department of Mathematics
University J.J. Strossmayer
[email protected]

Kresimir Veselic
Fernuniversitat in Hagen
[email protected]

CP 39. Probabilistic equations

Talk 1. Banded structures in probabilistic evolution equations for ODEs
Quite recently, a novel approach to solving ODEs with initial conditions, using probabilistic evolution equations, which can be considered as the ultimate linearisation, has been proposed and investigated in detail. One of the most important agents in this approach is the evolution matrix: the solution of the initial value problem of the infinite set of ODEs is basically determined by this matrix, while the influence of the initial conditions is just the specification of the initial point in an infinite-dimensional space. If the evolution matrix has a banded structure, then the solution can be constructed recursively and the convergence analysis becomes quite simplified.
In this presentation we focus on triangular evolution matrices which have just two diagonals. The construction of the truncation approximants over finite submatrices and the convergence of their sequences will be the focus.
Fatih Hunutlu
Dept. of Mathematics
Marmara University
[email protected]

N.A. Baykara
Dept. of Mathematics
Marmara University
[email protected]

Metin Demiralp
Informatics Institute
Istanbul Technical University
[email protected]

Talk 2. Space extensions in the probabilistic evolution equations of ODEs
The evolution matrix appearing in the recently developed method based on probabilistic evolution equations for the initial value problems of ODEs can be controlled in structure at the expense of an increase in the dimension of the space. Certain functions of the unknown functions are deliberately added to the unknowns; this simplifies the right hand side functions of the ODE(s). The basic features desired in the evolution are triangularity and conicality, which facilitate the construction of the solutions.
In this presentation we exemplify the use of space extension to obtain the features mentioned above, focusing on singularity and uniqueness issues.

Ercan Gurvit
Dept. of Mathematics
Marmara [email protected]

N.A. Baykara
Dept. of Mathematics
Marmara [email protected]

Metin Demiralp
Informatics Institute
Istanbul Technical [email protected]

Talk 3. Triangularity and conicality in probabilistic evolution equations for ODEs


2012 SIAM Conference on Applied Linear Algebra 98

We have recently shown that all explicit ordinary differential equations can be investigated through infinite linear ODE systems with a constant matrix coefficient (the evolution matrix) under appropriately defined infinitely many initial conditions. The evolution matrix is the basic determining agent for the characteristics of the solution. For a Taylor basis set it is in upper Hessenberg form, which becomes triangular if the right hand side functions of the considered ODE(s) vanish at the expansion point. Even though triangularity greatly facilitates the analysis, conicality, that is, a second degree multinomial right hand side structure, is the best nontrivial case (the linear case can be considered trivial in this perspective) for obtaining simple truncation approximants. The talk will focus on these types of issues.

Metin Demiralp
Group for Science and Methods of Computing
Informatics Institute
Istanbul Technical University, Turkiye (Turkey)
[email protected]

CP 40. Control systems III

Talk 1. H2 approximation of linear time-varying systems
We consider the problem of approximating a linear time-varying p×m discrete-time state space model S of high dimension by another linear time-varying p×m discrete-time state space model of much smaller dimension, using an error criterion defined over a finite time interval. We derive the gradients of the norm of the approximation error for the case with nonzero initial state. The optimal reduced order model is computed using a fixed point iteration. We compare this to the classical H2 norm approximation problem for the infinite horizon time-invariant case and show that our solution extends it to the time-varying and finite horizon case.

Samuel [email protected]

Paul Van [email protected]

Talk 2. Analysis of the behavior of eigenvalues and eigenvectors of singular linear systems
Let E(p)ẋ = A(p)x + B(p)u be a family of singular linear systems depending smoothly on a vector of real parameters p = (p1, . . . , pn). In this work we construct versal deformations of the given differentiable family under an equivalence relation, providing a special parametrization of the space of systems which can be effectively applied to perturbation analysis. In particular, we study the behavior of a simple eigenvalue of a singular linear system family E(p)ẋ = A(p)x + B(p)u.

Sonia Tarragona
Dept. de Matematicas
Universidad de [email protected]

M. Isabel García-Planas
Dept. de Matematica Aplicada I
Universitat Politecnica de [email protected]

Talk 3. Stabilization of controllable planar bimodal linear systems
We consider planar bimodal linear systems consisting of two linear dynamics acting on each side of a given hyperplane, assuming continuity along the separating hyperplane. We obtain an explicit characterization of their controllability, which can be reformulated simply as det C1 · det C2 > 0, where C1, C2 denote the controllability matrices of the subsystems. In particular, it is obvious from this condition that both subsystems must be controllable. Moreover, this condition allows us to prove that both subsystems can be stabilized by means of the same feedback. In contrast to linear systems, pole assignment is not achieved for bimodal linear systems, and we can only ensure the stabilization of these kinds of systems.

Marta Peña
Departament de Matematica Aplicada I
Universitat Politecnica de [email protected]

Josep Ferrer
Departament de Matematica Aplicada I
Universitat Politecnica de [email protected]
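The condition det C1 · det C2 > 0 from the abstract above is easy to check numerically. The sketch below (with illustrative subsystem matrices of our own choosing, not from the talk) forms the planar controllability matrices [b, Ab] for the two subsystems and evaluates the sign condition.

```python
import numpy as np

def ctrb(A, b):
    """Controllability matrix [b, Ab] of a planar single-input subsystem."""
    return np.column_stack([b, A @ b])

# Two illustrative planar subsystems sharing the input vector b = (0, 1).
A1, b1 = np.array([[0.0, 1.0], [-2.0, -3.0]]), np.array([0.0, 1.0])
A2, b2 = np.array([[0.0, 1.0], [-1.0, -1.0]]), np.array([0.0, 1.0])

d1 = np.linalg.det(ctrb(A1, b1))
d2 = np.linalg.det(ctrb(A2, b2))

# The abstract's criterion: det C1 * det C2 > 0 characterizes controllability
# of the bimodal system; it forces each subsystem to be controllable as well.
print(d1 * d2 > 0)
```

Here both determinants equal −1, so the product is positive and the criterion is met; flipping the sign of one input vector would violate it even though each subsystem stays controllable on its own.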

Talk 4. A combinatorial approach to feedback equivalence of linear systems
The feedback class of a reachable linear control system over a vector space is given by its Kronecker invariants or, equivalently, by its Ferrers diagram. We generalize this notion to a linear control system over a vector bundle (over a compact space) and also obtain a combinatorial invariant in a semigroup. Moreover, we point out that this new invariant may be simplified by using algebraic K-theory.

Miguel V. Carriegos
Dept. de Matematicas
Universidad de [email protected]

CP 41. Miscellaneous V

Talk 1. Randomized distributed matrix computations based on gossiping
We discuss new randomized algorithms for distributed matrix computations which are built on gossip-based data aggregation. In contrast to approaches where randomization in linear algebra algorithms is primarily utilized for approximation purposes, we investigate the flexibility and fault tolerance of distributed algorithms with randomized communication schedules. In our algorithms, each node communicates only with its neighborhood. Thus, they are attractive for decentralized and dynamic computing networks, and they can heal from hardware failures occurring at runtime.
As case studies, we discuss distributed QR decomposition and distributed orthogonal iteration, their performance, their resilience to hardware failures, and the influence of asynchrony.

Wilfried N. Gansterer
Faculty of Computer Science
University of [email protected]

Hana Strakova
Faculty of Computer Science
University of [email protected]

Gerhard Niederbrucker
Faculty of Computer Science
University of [email protected]

Stefan Schulze Grotthoff
Faculty of Computer Science
University of [email protected]
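Gossip-based data aggregation, the primitive underlying the talk above, can be illustrated with a minimal push-sum sketch (our own toy example, not the authors' algorithm): each node holds a (sum, weight) pair and repeatedly hands half of it to a random partner; every node's ratio sum/weight converges to the global mean, with the total sum conserved throughout.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 20
values = rng.standard_normal(n)   # one local value per node

# Push-sum state: per-node running sum s and weight w.
s, w = values.copy(), np.ones(n)

for _ in range(10000):
    # A random node i pushes half of its (sum, weight) pair to a random j.
    i, j = rng.choice(n, size=2, replace=False)
    ds, dw = s[i] / 2.0, w[i] / 2.0
    s[i] -= ds; w[i] -= dw
    s[j] += ds; w[j] += dw

estimates = s / w                 # every node's estimate of the global mean
print(np.max(np.abs(estimates - values.mean())))
```

The exchange conserves the totals of s and w exactly, which is the fault-tolerance hook: a node's estimate stays consistent no matter in which order the randomized exchanges occur.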

Talk 2. A tabular methodology for matrix Pade approximants with minimal row degrees
In this paper we propose a tabular procedure that makes the interpretation and application of a type of matrix Pade approximants associated with Scalar Component Models easier. The originality of these approximants lies in the concept of minimality defined and in the associated normalization. Considering matrix functions with k rows, in order to determine the sets of minimal row degrees associated with an approximant we study algebraic properties: what is relevant is the number and position of the linearly dependent rows in the block of the last k rows of certain Hankel matrices. We illustrate this procedure with several examples.

Celina Pestano-Gabino
Dept. of Applied Economics
University of La [email protected]

Concepcion Gonzalez-Concepcion
Dept. of Applied Economics
University of La [email protected]

María Candelaria Gil-Farina
Dept. of Applied Economics
University of La [email protected]

Talk 3. Sublinear randomized algorithms for skeleton decompositions
A skeleton decomposition is any factorization of the form A = CUR, where C comprises columns and R comprises rows of A. Much is known about how to choose C, U, and R with complexity superlinear in the number of elements of A. In this paper we investigate the sublinear regime, where far fewer elements of A are used to find the skeleton. Under an assumption of incoherence of the generating vectors of A (e.g., its singular vectors), we show that it is possible to choose rows and columns, and to find the middle matrix U in a well-posed manner, in complexity proportional to the cube of the rank of A up to log factors. Algorithmic variants involving rank-revealing QR decompositions are also discussed and shown to work in the sublinear regime.

Jiawei Chiu
Dept. of Mathematics
Massachusetts Institute of [email protected]

Laurent Demanet
Dept. of Mathematics
Massachusetts Institute of [email protected]
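The flavor of the sublinear regime can be seen in a few lines of numpy (a sketch of the generic CUR idea under the incoherence assumption, not the authors' algorithm): sample O(r log n) rows and columns uniformly at random and build the middle matrix from the sampled intersection only, so U is computed without ever touching all of A.

```python
import numpy as np

rng = np.random.default_rng(1)
n, r = 200, 5

# Exactly rank-r matrix with incoherent (random Gaussian) factors.
A = rng.standard_normal((n, r)) @ rng.standard_normal((r, n))

# Uniformly sample O(r log n) rows and columns.
s = int(2 * r * np.log(n))
I = rng.choice(n, size=s, replace=False)
J = rng.choice(n, size=s, replace=False)

C, R = A[:, J], A[I, :]
U = np.linalg.pinv(A[np.ix_(I, J)])   # middle matrix from the intersection only

err = np.linalg.norm(A - C @ U @ R) / np.linalg.norm(A)
print(err)
```

For an exactly rank-r matrix with generic factors the sampled row and column sets capture the row and column spaces, so the reconstruction error is at the level of machine precision; in the noisy case the talk's rank-revealing QR variants come into play.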

Talk 4. Preconditioners for strongly non-symmetric linear systems
Consider the system Au = f, where A is a non-symmetric positive real matrix. The matrix A is decomposed into a sum of the symmetric matrix A0 and the skew-symmetric matrix A1. When solving such linear systems, difficulties arise because the coefficient matrix can lose the diagonal dominance property. Consider the preconditioner (TPT)

P = (B_C + ω K_U) B_C^{-1} (B_C + ω K_L),

where K_L + K_U = A1, K_L = −K_U*, and B_C = B_C*. We use TPT as a preconditioner for the GMRES(m) and BiCG methods and compare it with the conventional SSOR preconditioner.

The standard 5-point central difference scheme on a regular mesh has been used for the approximation of the convection-diffusion equation with Dirichlet boundary conditions. Numerical experiments on strongly nonsymmetric systems are presented.

Lev Krukier
Southern Federal University, Computer Center,
Rostov-on-Don, [email protected]

Boris Krukier
Southern Federal University, Computer Center,
Rostov-on-Don, [email protected]

Olga Pichugina
Southern Federal University, Computer Center,
Rostov-on-Don, [email protected]
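A rough sketch of how such a product-type preconditioner can be wired into GMRES. The splitting A = A0 + A1 with K_U/K_L the strictly upper/lower parts of A1 follows the abstract; the choices B_C = A0 and ω = 1, and the grid and convection parameters, are our own illustrative assumptions, not the talk's settings.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# 2-D convection-diffusion, 5-point central differences on a regular mesh.
m = 16
h = 1.0 / (m + 1)
T = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(m, m))
D = sp.diags([-1, 1], [-1, 1], shape=(m, m))   # central-difference convection
I = sp.identity(m)
pe = 20.0                                       # convection strength
A = ((sp.kron(I, T) + sp.kron(T, I)) / h**2
     + pe * (sp.kron(I, D) + sp.kron(D, I)) / (2 * h)).tocsc()

A0 = (A + A.T) / 2            # symmetric part
A1 = (A - A.T) / 2            # skew-symmetric part
KU = sp.triu(A1, k=1)         # K_L + K_U = A1, K_L = -K_U^T
KL = sp.tril(A1, k=-1)
omega = 1.0
BC = A0.tocsc()               # assumed choice of the symmetric matrix B_C

F1 = spla.splu((BC + omega * KU).tocsc())
F2 = spla.splu((BC + omega * KL).tocsc())

def apply_Pinv(r):
    # P = (B_C + w K_U) B_C^{-1} (B_C + w K_L)  =>  P^{-1} = F2^{-1} B_C F1^{-1}
    return F2.solve(BC @ F1.solve(r))

n = A.shape[0]
b = np.ones(n)
M = spla.LinearOperator((n, n), matvec=apply_Pinv)
x, info = spla.gmres(A, b, M=M, restart=30, maxiter=200)
print(info, np.linalg.norm(A @ x - b) / np.linalg.norm(b))
```

With ω = 1 the product P differs from A only by the second-order term K_U B_C^{-1} K_L, which is why it works well when the skew-symmetric part dominates.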

CP 42. Multigrid II

Talk 1. Adaptive smoothed aggregation multigrid for nonsymmetric matrices
We investigate algebraic multigrid (AMG) methods, in particular those based on the smoothed aggregation approach, for solving linear systems Ax = b with a general, nonsymmetric matrix A. Recent results show that in this case it is reasonable to demand that the interpolation and restriction operators be able to accurately approximate singular vectors corresponding to the smallest singular values of A. Therefore, we present an extension of the bootstrap AMG setup which is geared towards the singular vectors of A instead of the eigenvectors, as in the original approach. We illustrate the performance of our method on highly nonsymmetric linear systems originating in the discretization of convection-diffusion equations, which show that our algorithm performs very well compared with established methods. In another series of numerical experiments, we present results indicating that our method can also be used as a very efficient preconditioner for the generalized minimal residual (GMRES) method.

Marcel Schweitzer
Dept. of Mathematics
University of [email protected]

Talk 2. Local Fourier analysis for multigrid methods on semi-structured triangular grids
Since the good performance of geometric multigrid methods depends on the particular choice of the components of the algorithm, local Fourier analysis (LFA) is often used to predict multigrid convergence rates and thus to design suitable components. In the framework of semi-structured grids, LFA is applied to each triangular block of the initial unstructured grid to choose suitable local components, giving rise to a block-wise multigrid algorithm which becomes a very efficient solver. The efficiency of this strategy is demonstrated for a wide range of applications; different model problems and discretizations are considered.

Carmen Rodrigo
Dept. of Applied Mathematics
University of [email protected]

Francisco J. Gaspar
Dept. of Applied Mathematics
University of [email protected]


Francisco J. Lisbona
Dept. of Applied Mathematics
University of [email protected]

Pablo Salinas
Dept. of Applied Mathematics
University of [email protected]

Talk 3. Approach for accelerating the convergence of multigrid methods using extrapolation methods
In this talk we present an approach for accelerating the convergence of multigrid methods. Multigrid methods are efficient methods for solving large problems arising from the discretization of partial differential equations, both linear and nonlinear. In some cases convergence may be slow (with some smoothers). Extrapolation methods are of interest whenever an iteration process converges slowly. We propose to formulate the problem as a fixed point problem and to accelerate the convergence of the fixed-point iteration by vector extrapolation. We revisit the polynomial-type vector extrapolation methods and apply them in the MGLab software.

Sebastien Duminil
Laboratoire de Mathematiques Pures et Appliquees
Universite du Littoral Cote d'Opale, Calais, [email protected]

Hassane Sadok
Laboratoire de Mathematiques Pures et Appliquees
Universite du Littoral Cote d'Opale, Calais, [email protected]
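The polynomial-type vector extrapolation idea mentioned above can be sketched with minimal polynomial extrapolation (MPE) applied to a slowly converging linear fixed-point iteration; the matrix below is a random stand-in for a slow smoother, not anything from the talk or MGLab.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 50

# Linear fixed-point iteration x <- G x + f with spectral radius 0.98
# (slowly convergent), and its exact fixed point for reference.
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
G = Q @ np.diag(np.linspace(0.1, 0.98, n)) @ Q.T
f = rng.standard_normal(n)
x_star = np.linalg.solve(np.eye(n) - G, f)

def mpe(x0, k):
    """Minimal polynomial extrapolation from k+2 fixed-point iterates."""
    X = [x0]
    for _ in range(k + 1):
        X.append(G @ X[-1] + f)
    U = np.column_stack([X[i + 1] - X[i] for i in range(k + 1)])
    # Least squares for the minimal-polynomial coefficients, with c_k = 1,
    # then normalize so the combination weights sum to one.
    c, *_ = np.linalg.lstsq(U[:, :k], -U[:, k], rcond=None)
    gamma = np.append(c, 1.0)
    gamma /= gamma.sum()
    return np.column_stack(X[:k + 1]) @ gamma

x0 = np.zeros(n)
plain = x0
for _ in range(20):
    plain = G @ plain + f            # 20 plain iterations for comparison

print(np.linalg.norm(plain - x_star), np.linalg.norm(mpe(x0, 20) - x_star))
```

With a comparable number of matrix-vector products, the extrapolated iterate is far more accurate than plain iteration, which is the effect the talk exploits when the multigrid cycle itself is treated as the fixed-point map.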

Talk 4. Algebraic multigrid (AMG) for saddle point systems
We present a self-stabilizing approach to the construction of AMG for Stokes-type saddle point systems of the form

K = ( A    B
      B^T  −C ),

where A > 0 and C ≥ 0. Our method is purely algebraic and does not rely on geometric information. We will show how to construct the interpolation and restriction operators P and R such that an inf-sup condition for K automatically implies an inf-sup condition for the coarse grid operator K_C = RKP. In addition, we give a two-grid convergence proof.

Bram Metsch
Institut fur Numerische Simulation
Universitat [email protected]
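The structural point behind the Galerkin coarse operator K_C = RKP can be checked in a toy computation (our own sketch; the talk's transfer operators are constructed algebraically, whereas here they are random block-diagonal matrices with R = P^T): the coarse operator keeps the saddle point block form, with a positive definite coarse A-block and a positive semidefinite coarse C-block.

```python
import numpy as np

rng = np.random.default_rng(4)
nu, npr, nuc, npc = 8, 4, 4, 2      # fine/coarse velocity and pressure sizes

# Stokes-type saddle point matrix K = [[A, B], [B^T, -C]], A > 0, C >= 0.
X = rng.standard_normal((nu, nu))
A = X @ X.T + nu * np.eye(nu)
B = rng.standard_normal((nu, npr))
Y = rng.standard_normal((npr, npr))
C = Y @ Y.T
K = np.block([[A, B], [B.T, -C]])

# Block-diagonal interpolation P (velocity and pressure coarsened separately),
# restriction R = P^T, Galerkin coarse operator K_C = R K P.
Pu = rng.standard_normal((nu, nuc))
Pp = rng.standard_normal((npr, npc))
P = np.block([[Pu, np.zeros((nu, npc))], [np.zeros((npr, nuc)), Pp]])
KC = P.T @ K @ P

AC = KC[:nuc, :nuc]                  # coarse A-block: Pu^T A Pu
CC = -KC[nuc:, nuc:]                 # coarse C-block: Pp^T C Pp
print(np.linalg.eigvalsh((AC + AC.T) / 2).min() > 0)
```

Block-diagonal transfers preserve the form by construction; what the talk adds is the choice of P and R guaranteeing that the inf-sup constant survives the coarsening.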