ti.arc.nasa.gov/m/pub/archive/0886.pdf
Introduction
1. With dual-core and hyper-threaded processors now appearing in personal computers, testing multi-threaded programs has become even more important.
2. When the conditions of the bug are finally recreated, the act of debugging itself may mask the bug (the observer effect).
3. The technology most commonly associated with concurrent testing is race detection.
Tool Classification
1. Static testing techniques
- Formal Verification
- Static Analysis
2. Dynamic testing techniques
- Noise makers
(for example, returning that no more memory is available in response to a memory allocation request; see the first sketch after this list)
- Race and deadlock detection
- Replay
- Coverage
- Systematic state space exploration
3. Cloning
Because the same test is cloned many times, contentions are almost guaranteed (see the cloning sketch below).
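A minimal sketch of the noise-maker idea. The `Noise` class and its call sites are illustrative assumptions, not the paper's API: calls are placed before shared-memory accesses (by hand here; a real tool would inject them via instrumentation) and randomly yield or sleep, so that different interleavings, including the buggy ones, get exercised.

```java
import java.util.concurrent.ThreadLocalRandom;

// Hypothetical noise-maker: randomly perturbs the schedule at a call site.
final class Noise {
    static void perturb() {
        int r = ThreadLocalRandom.current().nextInt(10);
        if (r == 0) {
            try {
                Thread.sleep(1); // occasionally pause to widen race windows
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        } else if (r < 4) {
            Thread.yield(); // frequently hand the CPU to another thread
        }
    }
}

class Counter {
    private int value;

    int get() { return value; }

    int incrementUnsafe() {
        Noise.perturb();   // noise before the read
        int v = value;     // read
        Noise.perturb();   // noise between read and write: classic lost-update window
        value = v + 1;     // write
        return value;
    }
}
```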
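A companion sketch of cloning, reusing the `Counter` above; the clone count and iteration count are illustrative. The same test body is launched as many concurrent clones, so the clones contend on the same shared state and the lost-update race fires with high probability.

```java
import java.util.concurrent.CountDownLatch;

public class CloneRunner {
    public static void main(String[] args) throws InterruptedException {
        final int clones = 50;
        final int iterations = 1_000;
        final Counter shared = new Counter();             // state shared by all clones
        final CountDownLatch start = new CountDownLatch(1);
        Thread[] threads = new Thread[clones];

        for (int i = 0; i < clones; i++) {
            threads[i] = new Thread(() -> {
                try {
                    start.await();                        // line up all clones
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                    return;
                }
                for (int j = 0; j < iterations; j++) {
                    shared.incrementUnsafe();             // the cloned test body
                }
            });
            threads[i].start();
        }
        start.countDown();                                // release every clone at once
        for (Thread t : threads) {
            t.join();
        }
        // With the race present, the printed value is typically below 50,000.
        System.out.println("expected " + clones * iterations + ", got " + shared.get());
    }
}
```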
Main Content
The following is the main part of the paper: how the tools can be benchmarked. The paper first argues that one should recognize the need to use several tools in combination rather than relying on any single tool, and then that as many test programs as possible should be collected for evaluating the tools.
1. First, as shown in Figure 1, the paper proposes storing the observations produced by the tools in a single database, and recommends building on this shared data to exploit each tool's unique capabilities.
The database holds the following information (a schema sketch follows the list):
- Interesting variables – for example variables that could be involved in races or bugs
- Possible race locations – locations in the program that are suspect
- Unimportant locations – areas that are well synchronized, for example because only one thread may be alive at that time
- Coverage information – database showing which coverage tasks were covered
- Traces of executions – to be used by off-line analyzers
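A minimal sketch of what such a shared observation store could look like in Java. The record names and fields are assumptions derived from the list above, not the paper's actual schema.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical observation records mirroring the categories above.
record InterestingVariable(String className, String fieldName, String reason) {}
record PossibleRace(String location, String variable, String reportingTool) {}
record UnimportantLocation(String location, String justification) {}
record CoverageTask(String taskId, boolean covered) {}

// One shared store that every tool reads from and writes to.
class ObservationDatabase {
    private final List<InterestingVariable> variables = new ArrayList<>();
    private final List<PossibleRace> races = new ArrayList<>();
    private final List<UnimportantLocation> unimportant = new ArrayList<>();
    private final List<CoverageTask> coverage = new ArrayList<>();

    void report(PossibleRace race) { races.add(race); }

    // A noise maker could query suspect locations and concentrate its noise there.
    List<PossibleRace> suspectLocations() { return List.copyOf(races); }
    // ... analogous reporters and accessors for the other observation kinds ...
}
```

The shared store is exactly what enables the mix-and-match the paper argues for: a static analyzer can deposit suspect locations, and a dynamic noise maker can read them back to target its perturbation.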
2. Next, the paper lists the test programs and related materials needed to evaluate the tools:
- Source code (and bytecode) in standard project format
- Test cases and test drivers
- Documentation of the bugs in each program
- Instrumented versions of the programs, to be used by noise, replay, coverage, and race applications (see the instrumentation sketch below)
- Sample traces of program executions
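To illustrate what an "instrumented version" might look like, here is a hand-instrumented fragment; real tools rewrite bytecode automatically, and the `ConcurrencyHooks` interface and its call sites are hypothetical.

```java
// Hypothetical listener interface that noise, replay, coverage, and race
// tools could each implement to observe or perturb the run.
interface ConcurrencyHooks {
    void beforeSharedRead(String location);
    void beforeSharedWrite(String location);
}

class Account {
    private int balance;
    private final ConcurrencyHooks hooks;

    Account(ConcurrencyHooks hooks) { this.hooks = hooks; }

    // Original code:           balance = balance + amount;
    // Instrumented equivalent, with a hook before each shared access:
    void deposit(int amount) {
        hooks.beforeSharedRead("Account.deposit:balance");
        int b = balance;
        hooks.beforeSharedWrite("Account.deposit:balance");
        balance = b + amount;
    }
}
```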
3. Below, the paper explains why a repository holding the various tools is needed:
The second component of the benchmark is a repository of tools, together with the observation database. This way, researchers can use a mix-and-match approach and complement their components with benchmark components to create and evaluate solutions based on the created whole.
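One way to read this mix-and-match idea in code; the interface name and signature are assumptions. Every tool in the repository runs against a benchmark program and the shared `ObservationDatabase` from the sketch above, so the output of one tool becomes the input of the next.

```java
import java.util.List;

// Hypothetical plug-in interface for tools in the repository.
interface BenchmarkTool {
    String name();
    void run(String programPath, ObservationDatabase db); // ObservationDatabase as sketched earlier
}

class Pipeline {
    // Chain tools over the same program and shared observations,
    // e.g. static analysis first, then a noise maker, then a race detector.
    static void evaluate(List<BenchmarkTool> tools, String programPath, ObservationDatabase db) {
        for (BenchmarkTool tool : tools) {
            tool.run(programPath, db);
        }
    }
}
```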
From the conclusion:
There are specific attempts at creating tools that are composed of a variety of technologies but they do not provide an open interface for extension and do not support the evaluation of competing tools and technologies.