Intel® Distribution of OpenVINO™ Toolkit
Community assistance for the Intel® Distribution of OpenVINO™ toolkit, OpenCV, and all aspects of computer vision on Intel® platforms.

About StridedSlice

Hayashi57
Novice

I have been using a custom model based on the YOLOX framework in TensorFlow, which I customized myself and converted to an OpenVINO IR model.

It has been in use for several years and, up until OpenVINO version 2022.1.0, it was running stably.

However, after updating to OpenVINO version 2023.1.0, I have been experiencing frequent errors. The situation persists even when using the master branch on GitHub (OpenVINO 2023.3).

After building the source code from the master branch (OpenVINO 2023.1.0) and investigating the issue, I confirmed that memory corruption occurs during the processing of StridedSlice.

The detailed explanation of the questionable behavior is as follows:

In the paramsInitialization method, the attrs argument is copied into params.attrs, and there appear to be cases where the values of this attrs are inconsistent. Given the for loop and if statement in dimsNormalization, attrs.begin.size() and attrs.ellipsisMask.size() should be equal. However, when I examined the contents of attrs at the moment an error occurred, attrs.begin.size() was 2 while attrs.ellipsisMask.size() was only 1. Because that loop iterates up to attrs.begin.size() and indexes attrs.ellipsisMask with the same index, the shorter vector is read out of bounds, which corrupts memory and makes the subsequent behavior unstable.
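The following is a minimal, simplified sketch of the failure pattern described above (it is not the actual OpenVINO source): when the loop bound comes from begin.size() but ellipsisMask is shorter, the indexing reads past the end of the vector.

#include <iostream>
#include <vector>

int main() {
    // Sizes as observed when the error occurs: begin has 2 entries, ellipsisMask only 1.
    std::vector<int> begin        = {1, 0};
    std::vector<int> ellipsisMask = {0};

    for (size_t axis = 0; axis < begin.size(); ++axis) {
        // When axis == 1 this indexes past the end of ellipsisMask,
        // which is undefined behavior and the likely source of the corruption.
        if (ellipsisMask[axis] == 1)
            std::cout << "ellipsis at axis " << axis << std::endl;
    }
    return 0;
}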

Aznie_Intel
Moderator

Hi Hayashi57,

 

Thanks for highlighting this. Could you share how you confirmed that memory corruption occurs during the process?

 

You may also share your model file for us to validate this from our end.

 

 

Regards,

Aznie


Hayashi57
Novice

 

Hi Aznie,

We are encountering sporadic errors when performing inference with the attached file (model.zip).

 

The minimal C++ code triggering the error is as follows:

#include <iostream>
#include <openvino/openvino.hpp>

int main() {
    ov::Core plugin;
    auto network = plugin.read_model("model.xml");
    auto compiled_model = plugin.compile_model(network, "CPU");
    auto infer_requests = compiled_model.create_infer_request();
    std::cout << "infer begin" << std::endl;
    infer_requests.infer();
    std::cout << "infer end" << std::endl;
    return 0;
}
 

The attached image (success.jpg) shows the command-prompt output when no error occurs.

The attached image (failure.jpg) shows the output when an error occurs.

 

Environment:

  • OS: Windows 10 Home 22H2
  • OpenVINO: 2023.1.0-1-47b736f63ed-HEAD
Aznie_Intel
Moderator

 

Hi Hayashi,

 

Thanks for sharing your model. I have tested it with the Benchmark App to estimate its deep learning inference performance. However, the inference seems inconsistent, and the output is only generated after 4 inference requests. May I know what your expected output from the model is, and which application you are running it with?

 

On another note, to check for memory leaks you can use tools such as AddressSanitizer or Valgrind. For indirect leaks, which cannot be caught by these tools, peak RAM (VmHWM) can be tracked instead (you can use tests/stress_tests/memleaks_tests as a tracking tool). If you experience a significant increase in memory usage, please report it in GitHub Issues.

 

Meanwhile, we only provide support in English. Could you reshare your error in English for us to have a better understanding? Thanks.

 

 

Regards,

Aznie


Hayashi57
Novice

Hi Aznie,

 

> the inference seems inconsistent
I had omitted the configuration of the input and output tensors because the memory corruption occurs regardless of the contents of the tensors.
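(For reference, a hedged sketch of how an input tensor could be configured just before infer() in the minimal snippet above; the f32 element type and the shape {1, 3, 640, 640} are placeholder assumptions, not values taken from the attached model.)

#include <algorithm>  // std::fill_n, in addition to the includes above

// Hypothetical input setup, for illustration only.
ov::Tensor input(ov::element::f32, ov::Shape{1, 3, 640, 640});
std::fill_n(input.data<float>(), input.get_size(), 0.f);  // dummy input data
infer_requests.set_input_tensor(input);
infer_requests.infer();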

 

> Could you reshare your error in English version for us to have a better understanding?
I'm sorry. However, it is not straightforward to translate because the screenshots show the command prompt of a Japanese-language OS. What I wanted to convey is that an error occurred during execution of the infer method of ov::InferRequest.

 

The memory corruption can be reproduced with only the following three lines of C++ code.

ov::Core plugin;
auto network = plugin.read_model("model.xml");
auto compiled_model = plugin.compile_model(network, "CPU"); // memory corruption occurs here

To demonstrate this, I added the following debug output to strided_slice.cpp.

Here is a partial listing of the two methods in question:

void StridedSlice::StridedSliceCommonExecutor::paramsInitialization(const StridedSliceAttributes& attrs,
                                                                    const std::vector<MemoryCPtr>& srcMemory,
                                                                    const std::vector<MemoryCPtr>& dstMemory) {
    const auto srcBlockedMemoryDesc = srcMemory[0]->getDescWithType<BlockedMemoryDesc>();
    const auto dstBlockedMemoryDesc = dstMemory[0]->getDescWithType<BlockedMemoryDesc>();

    params.attrs = attrs;
    std::cout << "paramsInitialization function" << std::endl; // add
    std::cout << "params.attrs.ellipsisMask.size() = " << params.attrs.ellipsisMask.size() << std::endl; // add
    std::cout << "params.attrs.begin.size() = " << params.attrs.begin.size() << std::endl << std::endl; // add
void StridedSlice::StridedSliceCommonExecutor::dimsNormalization() {
    // creating new src and dst dimensions and parameters of the same size using masks
    //
    // example 1: before srcDims = [5, 6, 8, 3, 2], begin = [1, 0], end = [4, 0], stride = [1, 1]
    //            beginMask = [0, 1], endMask = [0, 1], ellipsisMask = [1, 0], newAxisMas = [0, 0], shrinkAxisMask = [0, 0]
    //            after srcDims = [5, 6, 8, 3, 2], begin = [1, 0, 0, 0, 0], end = [4, 5, 7, 2, 1], stride = [1, 1, 1, 1, 1], dstDims = [4, 6, 8, 3, 2]
    //
    // example 2: before srcDims = [5, 6, 8, 3, 2], begin = [0, 3, 0, 0, 0], end = [0, 3, 0, 0, 0], stride = [1, 1, 1, 1, 1]
    //            beginMask = [1, 0, 1, 1, 1], endMask = [1, 0, 1, 1, 1], ellipsisMask = [0, 0, 0, 0, 0], newAxisMask = [0, 0, 0, 0, 0],
    //            shrinkAxisMask = [0, 1, 0, 0, 0]
    //            after srcDims = [5, 6, 8, 3, 2], begin = [0, 3, 0, 0, 0], end = [4, 3, 7, 2, 1], stride = [1, 1, 1, 1, 1], dstDims = [5, 1, 8, 3, 2]
    //
    // example 3: before srcDims = [5, 8, 3, 2], begin = [0, 0, 0, 0], end = [0, 0, 0, 0], stride = [1, 1, 1, 1]
    //            beginMask = [1, 0, 1, 1, 1], endMask = [1, 0, 1, 1, 1], ellipsisMask = [0, 0, 0, 0, 0], newAxisMask = [0, 1, 0, 0, 0],
    //            shrinkAxisMask = [0, 0, 0, 0, 0]
    //            after srcDims = [5, 1, 8, 3, 2], begin = [0, 0, 0, 0, 0], end = [4, 0, 7, 2, 1], stride = [1, 1, 1, 1, 1], dstDims = [5, 1, 8, 3, 2]

    auto clipping = [](int& idx, const int min, const int max) {
        idx = (idx > min) ? idx : min;
        idx = (idx < max) ? idx : (max - 1);
    };

    auto correcting = [](int& dim, const size_t shift) {
        dim = dim >= 0 ? dim : shift + dim;
    };

    VectorDims newSrcDims, newDstDims;
    std::vector<int> beginTemp;
    std::vector<int> endTemp;
    std::vector<int> strideTemp;
    size_t srcIdx = 0;
    std::cout << "dimsNormalization function" << std::endl; // add
    std::cout << "params.attrs.ellipsisMask.size() = " << params.attrs.ellipsisMask.size() << std::endl; // add
    std::cout << "params.attrs.begin.size() = " << params.attrs.begin.size() << std::endl << std::endl; // add
    for (size_t axis = 0; axis < params.attrs.begin.size(); ++axis) {
        if (params.attrs.ellipsisMask[axis] == 1) {
            int nNewAxisAfterEllipses = 0;
            int nSrcAxisBeforeEllipses = 0;
            for (size_t i = 0; i < axis; ++i) {
                if (params.attrs.newAxisMask[i] != 1)
                    nSrcAxisBeforeEllipses++;
            }
            for (size_t i = axis + 1; i < params.attrs.begin.size(); ++i) {
                if (params.attrs.newAxisMask[i] == 1)
                    nNewAxisAfterEllipses++;
            }
            // ... (remainder of dimsNormalization unchanged)

I custom-built OpenVINO with these modifications to strided_slice.cpp. When executing plugin.compile_model as described above, the command prompt displays the following output:

paramsInitialization function
params.attrs.ellipsisMask.size() = 4
params.attrs.begin.size() = 4

dimsNormalization function
params.attrs.ellipsisMask.size() = 4
params.attrs.begin.size() = 4

paramsInitialization function
params.attrs.ellipsisMask.size() = 4
params.attrs.begin.size() = 4

dimsNormalization function
params.attrs.ellipsisMask.size() = 4
params.attrs.begin.size() = 4

paramsInitialization function
params.attrs.ellipsisMask.size() = 4
params.attrs.begin.size() = 4

dimsNormalization function
params.attrs.ellipsisMask.size() = 4
params.attrs.begin.size() = 4

paramsInitialization function
params.attrs.ellipsisMask.size() = 4
params.attrs.begin.size() = 4

dimsNormalization function
params.attrs.ellipsisMask.size() = 4
params.attrs.begin.size() = 4

paramsInitialization function
params.attrs.ellipsisMask.size() = 2
params.attrs.begin.size() = 2

dimsNormalization function
params.attrs.ellipsisMask.size() = 2
params.attrs.begin.size() = 2

paramsInitialization function
params.attrs.ellipsisMask.size() = 1
params.attrs.begin.size() = 2

dimsNormalization function
params.attrs.ellipsisMask.size() = 1
params.attrs.begin.size() = 2

paramsInitialization function
params.attrs.ellipsisMask.size() = 1
params.attrs.begin.size() = 2

dimsNormalization function
params.attrs.ellipsisMask.size() = 1
params.attrs.begin.size() = 2

paramsInitialization function
params.attrs.ellipsisMask.size() = 2
params.attrs.begin.size() = 2

dimsNormalization function
params.attrs.ellipsisMask.size() = 2
params.attrs.begin.size() = 2

paramsInitialization function
params.attrs.ellipsisMask.size() = 2
params.attrs.begin.size() = 2

dimsNormalization function
params.attrs.ellipsisMask.size() = 2
params.attrs.begin.size() = 2

paramsInitialization function
params.attrs.ellipsisMask.size() = 1
params.attrs.begin.size() = 2

dimsNormalization function
params.attrs.ellipsisMask.size() = 1
params.attrs.begin.size() = 2

paramsInitialization function
params.attrs.ellipsisMask.size() = 2
params.attrs.begin.size() = 2

dimsNormalization function
params.attrs.ellipsisMask.size() = 2
params.attrs.begin.size() = 2

paramsInitialization function
params.attrs.ellipsisMask.size() = 1
params.attrs.begin.size() = 2

dimsNormalization function
params.attrs.ellipsisMask.size() = 1
params.attrs.begin.size() = 2

paramsInitialization function
params.attrs.ellipsisMask.size() = 2
params.attrs.begin.size() = 2

dimsNormalization function
params.attrs.ellipsisMask.size() = 2
params.attrs.begin.size() = 2

For example, I think the output of the first dimsNormalization block above that reports ellipsisMask.size() = 1 is incorrect.

Those values indicate that the attrs.ellipsisMask array has size 1 while attrs.begin has size 2.

In that case, axis takes the values 0 and 1 in the for loop, so indexing the ellipsisMask array with axis clearly reads out of bounds!

Tracing back, the paramsInitialization output immediately before that block already shows attrs.ellipsisMask with size 1 and attrs.begin with size 2. In other words, it suggests that the attrs argument (const StridedSliceAttributes&) passed to paramsInitialization may already be incorrect.
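As a purely illustrative aside (this is a hypothetical guard, not the actual OpenVINO code and not the eventual fix), a size-consistency check placed right after params.attrs = attrs; in paramsInitialization would make the mismatch fail loudly instead of silently corrupting memory:

// Hypothetical sanity check, for illustration only (requires <stdexcept>);
// not part of the OpenVINO source or of the eventual fix.
if (params.attrs.ellipsisMask.size() < params.attrs.begin.size() ||
    params.attrs.newAxisMask.size() < params.attrs.begin.size()) {
    throw std::runtime_error("StridedSlice: mask vectors are shorter than begin");
}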

 

Aznie_Intel
Moderator

Hi Hayashi57,

 

We are checking this with the engineering team and will get back to you once the information is available.

 

 

Regards,

Aznie


Aznie_Intel
Moderator

Hi Hayashi57,

 

Thanks for your patience.

Pull request #22109 contains the fix for the out-of-bounds memory access. Could you try this pull request and confirm whether it fixes the issue with the incorrect inference output?

 

 

Regards,

Aznie


Hayashi57
Novice

Hi Aznie,

 

Thank you for your assistance.
I have confirmed that the issue is resolved when using the fix you provided.

 

Best regards,

Hayashi

Aznie_Intel
Moderator

Hi Hayashi57,


Thank you for confirming your problem is resolved. I am glad to hear everything is going well. This thread will no longer be monitored since this issue has been resolved. If you need any additional information from Intel, please submit a new question.



Regards,

Aznie

