Sparse representation approach to inverse halftoning in terms of DCT dictionary

Yuhri Ohta, Toshiaki Aida

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

6 Citations (Scopus)

Abstract

The problem of inverse halftoning is approached on the basis of compressed sensing, which enables significantly more efficient inference through a sparse representation of the data to be inferred. For this purpose, we adopt a DCT dictionary as a basis for representing image patches. In a Bayesian formulation of the problem that takes the sparse representation into account, the MAP estimate is found to lead to an inverse halftoning algorithm that can be interpreted as a linear programming problem. Numerical simulations have confirmed the effectiveness of the algorithm, allowing us to conclude that the compressed sensing approach is effective for the problem of inverse halftoning.
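
The abstract does not reproduce the derivation, but the reformulation it mentions, L1-penalized sparse coding over a DCT dictionary cast as a linear program, can be sketched in a standard way. The following is only an illustrative sketch under assumed details (8x8 patches, an orthonormal 2D DCT basis, and an elementwise tolerance `eps` standing in for the paper's actual halftone fidelity constraint and Bayesian prior); it is not the authors' algorithm.

```python
# Minimal sketch of the L1 -> linear programming reformulation referred to in
# the abstract. NOT the paper's algorithm: the halftoning model, prior, and
# patch handling are assumptions made here for illustration only.
import numpy as np
from scipy.optimize import linprog


def dct_dictionary(n=8):
    """Columns are flattened n-by-n 2D DCT-II atoms (orthonormal basis)."""
    k = np.arange(n)
    C = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    C[0, :] /= np.sqrt(2)
    C *= np.sqrt(2.0 / n)
    # Synthesis dictionary: patch = D @ coefficients (row-major flattening).
    return np.kron(C, C).T


def sparse_code_lp(y, D, eps=0.25):
    """Solve  min ||x||_1  s.t.  |D x - y| <= eps  as a linear program.

    Split x = u - v with u, v >= 0 and minimize 1^T (u + v) subject to
    D(u - v) <= y + eps  and  -D(u - v) <= eps - y.
    """
    m, p = D.shape
    c = np.ones(2 * p)
    A_ub = np.vstack([np.hstack([D, -D]), np.hstack([-D, D])])
    b_ub = np.concatenate([y + eps, eps - y])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=(0, None), method="highs")
    u, v = res.x[:p], res.x[p:]
    return u - v  # sparse DCT coefficients


# Toy usage on a single patch: sparse-code a binary (halftone-like) patch and
# synthesize a grayscale estimate from the recovered DCT coefficients.
rng = np.random.default_rng(0)
D = dct_dictionary(8)
smooth = rng.normal(size=(8, 8)).cumsum(axis=0).cumsum(axis=1)
smooth = (smooth - smooth.min()) / (smooth.max() - smooth.min())
halftone = (smooth > 0.5).astype(float)   # crude stand-in for an error-diffused halftone
x = sparse_code_lp(halftone.ravel(), D, eps=0.25)
estimate = (D @ x).reshape(8, 8)          # smoothed grayscale reconstruction
```

In a full inverse halftoning pipeline such a patch-wise estimate would presumably be applied over overlapping patches and the results aggregated, but those details are not specified by the abstract.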

Original language: English
Title of host publication: International Conference on Control, Automation and Systems
Publisher: IEEE Computer Society
Pages: 1377-1380
Number of pages: 4
ISBN (Electronic): 9788993215069
DOIs
Publication status: Published - Dec 16 2014
Event: 2014 14th International Conference on Control, Automation and Systems, ICCAS 2014 - Gyeonggi-do, Korea, Republic of
Duration: Oct 22 2014 - Oct 25 2014

Publication series

Name: International Conference on Control, Automation and Systems
ISSN (Print): 1598-7833

Other

Other: 2014 14th International Conference on Control, Automation and Systems, ICCAS 2014
Country/Territory: Korea, Republic of
City: Gyeonggi-do
Period: 10/22/14 - 10/25/14

Keywords

  • Bayesian statistical inference
  • compressed sensing
  • inverse halftoning

ASJC Scopus subject areas

  • Artificial Intelligence
  • Computer Science Applications
  • Control and Systems Engineering
  • Electrical and Electronic Engineering
