
Dynamic Mask

flux Posts: 1 New Users
edited January 2012 in iOS SDK Game Development
Hi Everyone,

I have searched this forum several times now, and I don't seem to find an answer to my problem.

Other people have asked the same question already, but I'm under the impression the problem is not as easy to solve.

Basically, what I am trying to do is the following:

I would like to put a layer on top of a picture and then erase that layer through touches or accelerometer movement, 'scratching away' the place where a touch has occurred.

Like this:
dynamicMask.jpg

Now I have got this working like this:
- (void)drawRect:(CGRect)rect {
     
     // Retrieve the graphics context
     CGContextRef context = UIGraphicsGetCurrentContext();
     
     //Get the drawing image
     CGImageRef maskImage = [self drawImageWithContext:context inRect:rect];
     
     // Get the mask from the image
     CGImageRef mask = CGImageMaskCreate(CGImageGetWidth(maskImage)
                                                  , CGImageGetHeight(maskImage)
                                                  , CGImageGetBitsPerComponent(maskImage)
                                                  , CGImageGetBitsPerPixel(maskImage)
                                                  , CGImageGetBytesPerRow(maskImage)
                                                  ,  CGImageGetDataProvider(maskImage)
                                                  , NULL
                                                  , false);
     
     //make sure the images are not upside down
     CGContextTranslateCTM(context, 0, self.bounds.size.height);
     CGContextScaleCTM(context, 1.0, -1.0);
     
     //Draw the base picture
     CGContextDrawLayerInRect(context, rect, firstLayer);
     
     //Add clipping
     CGContextClipToMask(context, rect, mask);
     
     //Release the mask
     CGImageRelease(mask);

     //Release the maskImage
     CGImageRelease(maskImage);
     
     //draw the second layer
     CGContextDrawLayerInRect(context, rect, secondLayer);
}

As you can see, I have a drawRect: method where I make an image based on the touches. This image is then used to create an image mask.

I draw the first picture, lay down the mask, and then draw the second picture.

This works, effectively resolving my problem.

But there are two drawbacks:
1) If I draw anything else over my last picture, it is also clipped...
2) Perhaps most importantly, when I call drawRect: several times (from the accelerometer), I lose performance, making the application impossible to use...

So basically my question is:

Is there another way of resolving my problem? Do I need to look at this differently? Do I need to copy and paste pixels?
Do I need alpha blending? Or do I need to look at OpenGL?

I'm hoping somebody can finally shed some light on this, because I'm really sitting in the dark here...
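
The drawImageWithContext:inRect: helper is not shown in this thread. A minimal sketch of what such a helper could look like (purely illustrative: it assumes the touch locations are kept in a hypothetical touchPoints array of NSValue-wrapped CGPoints and rendered as white strokes on a black background; the real implementation may differ):

// Hypothetical sketch only: renders the collected touch strokes into a
// grayscale bitmap and returns it as a CGImage for use as the mask source.
// The caller owns (and must release) the returned image.
// (The context passed in is not used here; the strokes go into their own bitmap.)
- (CGImageRef)drawImageWithContext:(CGContextRef)context inRect:(CGRect)rect {
     CGColorSpaceRef gray = CGColorSpaceCreateDeviceGray();
     CGContextRef maskContext = CGBitmapContextCreate(NULL,
                                                      (size_t)rect.size.width,
                                                      (size_t)rect.size.height,
                                                      8, 0, gray,
                                                      kCGImageAlphaNone);
     CGColorSpaceRelease(gray);

     // Start fully black, then paint white strokes where the user touched.
     // (Depending on the desired effect, the black/white roles may need to be swapped.)
     CGContextSetGrayFillColor(maskContext, 0.0, 1.0);
     CGContextFillRect(maskContext, rect);
     CGContextSetGrayStrokeColor(maskContext, 1.0, 1.0);
     CGContextSetLineWidth(maskContext, 30.0);
     CGContextSetLineCap(maskContext, kCGLineCapRound);

     CGContextBeginPath(maskContext);
     for (NSUInteger i = 0; i < [touchPoints count]; i++) {
          CGPoint p = [[touchPoints objectAtIndex:i] CGPointValue];
          if (i == 0) {
               CGContextMoveToPoint(maskContext, p.x, p.y);
          } else {
               CGContextAddLineToPoint(maskContext, p.x, p.y);
          }
     }
     CGContextStrokePath(maskContext);

     CGImageRef image = CGBitmapContextCreateImage(maskContext);
     CGContextRelease(maskContext);
     return image;
}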

Replies

  • dewhacker Posts: 15 Registered Users
    edited February 2009
    You might find this easier to implement in OpenGL using alpha blending; check out the GLPaint demo that Apple provides. You basically want to do what it does, but inverted.
  • indiantroy Posts: 27 Registered Users
    edited March 2009
    Hi Flux,

    I am trying to achieve the same thing you mentioned. I posted my question on this forum too, but unfortunately didn't get any answer.

    I tried to understand the code snippet you posted, but it seems incomplete, and being a newbie to iPhone app development I was not able to understand it fully.

    Would you mind sharing the complete code so I can understand the context? It would help me greatly in learning how the masking works.

    Thanks in advance,
    iTroy



    flux wrote: »
    I would like to put a layer on top of a picture and then erase that layer through touches or accelerometer movement, 'scratching away' the place where a touch has occurred. [...]
  • NickFalk Posts: 20 Registered Users
    edited March 2009
    Wouldn't it be possible to fake it?

    Instead of actually erasing part of the image you touch, you could copy a circular part of another (hidden) picture on top of the image you're touching.

    Just a thought...
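
    A rough sketch of that idea (the names hiddenImage and overlayView are made up for illustration: a UIImage holding the hidden picture and the UIImageView showing the visible overlay): clip to a circle around the touch point and stamp the hidden picture into the overlay image.

    - (void)revealCircleAtPoint:(CGPoint)point radius:(CGFloat)radius {
        UIGraphicsBeginImageContext(overlayView.bounds.size);
        CGContextRef ctx = UIGraphicsGetCurrentContext();

        // Start from the current overlay contents.
        [overlayView.image drawInRect:overlayView.bounds];

        // Clip to a circle around the touch and draw the hidden picture into it.
        CGRect circle = CGRectMake(point.x - radius, point.y - radius,
                                   radius * 2.0, radius * 2.0);
        CGContextAddEllipseInRect(ctx, circle);
        CGContextClip(ctx);
        [hiddenImage drawInRect:overlayView.bounds];

        overlayView.image = UIGraphicsGetImageFromCurrentImageContext();
        UIGraphicsEndImageContext();
    }

    Calling it from touchesMoved: with the touch location gradually 'reveals' the hidden picture without ever erasing pixels from the top image.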
  • bedaronco Posts: 3 New Users
    edited March 2009
    NickFalk wrote: »
    Wouldn't it be possible to fake it?

    Instead of actually erasing part of the image you touch, you could copy a circular part of another (hidden) picture on top of the image you're touching.

    Just a thought...

    How do you copy a part of an image to the top of the image showing?
  • am_ran32 Posts: 11 Registered Users
    edited June 2009
    Hi Flux,

    Did you get lucky with masking the image with gestures over a background image? I'm hitting a similar requirement and would appreciate your thoughts on it.

    Cheers
    Evan
    flux wrote: »
    I would like to put a layer on top of a picture and then erase that layer through touches or accelerometer movement, 'scratching away' the place where a touch has occurred. [...]
  • ptioun Posts: 2 New Users
    edited July 2009
    Hello
    Did you manage to realize this? I'm very interested in it for my app.
    If you did, could you give us the solution, or part of it...

    regards
    Alex
  • milanjansari Posts: 239 Registered Users
    edited September 2009
    flux wrote: »
    I would like to put a layer on top of a picture and then erase that layer through touches or accelerometer movement, 'scratching away' the place where a touch has occurred. [...]


    hello,

    If you don't mind, please send me the drawImageWithContext: method; I have to implement this functionality.
    I searched Google but did not find an appropriate result.

    Thank you,
  • milanjansari Posts: 239 Registered Users
    edited September 2009
    hello,

    Kindly help me.

    Thank you,
  • smsawant Posts: 82 Registered Users
    edited September 2009
    flux wrote: »
    I would like to put a layer on top of a picture and then erase that layer through touches or accelerometer movement, 'scratching away' the place where a touch has occurred. [...]

    hi flux,

    I have two images which are overlapping each other (the way cards are placed on top of each other).

    Now, if I move my finger over the topmost image, that portion of the image should become transparent (the opacity of that part should become 0).

    I have tried the following code in the touch-moved event to achieve this, and it basically works. But the problem with this code is that I am not getting an appropriate finish with it.
    UIGraphicsBeginImageContext(frontImage.frame.size);
    [drawImage.image drawInRect:CGRectMake(0, 0, frontImage.frame.size.width, frontImage.frame.size.height)];
    CGContextSetLineCap(UIGraphicsGetCurrentContext(), kCGLineCapRound);
    CGContextSetLineWidth(UIGraphicsGetCurrentContext(), 10.0);
    CGContextSetRGBStrokeColor(UIGraphicsGetCurrentContext(), 1, 0, 0, 5);
    CGContextMoveToPoint(UIGraphicsGetCurrentContext(), lastPoint.x, lastPoint.y);
    CGContextClearRect(UIGraphicsGetCurrentContext(), CGRectMake(lastPoint.x, lastPoint.y, frontImage.frame.size.width, frontImage.frame.size.height));
    CGContextStrokePath(UIGraphicsGetCurrentContext());
    CGContextFlush(UIGraphicsGetCurrentContext());
    frontImage.image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    
    

    I do not have a clear idea of how I can do it.

    Kindly help me out or give me some pointers.

    Thanks in advance
    You can mail me at sanketsawant1@gmail.com
  • tej@barbhya Posts: 3 New Users
    edited March 2010
    smsawant wrote: »
    I have two images which are overlapping each other (the way cards are placed on top of each other)... [...]

    Hi,
    Can you please provide me with the code to mask the image? I saw the code above but didn't succeed in implementing it. Please share that code snippet; it would be a great help after my 3 days of effort.

    Thanks In Advance,
    Tej
  • Mr Jack Posts: 395 Registered Users
    edited March 2010
    NickFalk wrote: »
    Instead of actually erasing part of the image you touch, you could copy a circular part of another (hidden) picture on top of the image you're touching.

    I'm with Nick - that's the smart way of doing it. Unless you want to have moving objects behind the mask, in which case you could write to the alpha component of the mask instead.
    <a href="http://itunes.apple.com/gb/app/alien-swing/id352732312?mt=8"; target="_blank">sig_ad.jpg</a><br />
    <br />
    Visit <a href="http://mrjackgames.co.uk/"; target="_blank">Mr Jack Games</a> for my blog and more
  • asifali Posts: 4 New Users
    edited January 2011
    flux wrote: »
    I would like to put a layer on top of a picture and then erase that layer through touches or accelerometer movement, 'scratching away' the place where a touch has occurred. [...]

    Hi, can you tell me how you did this? Revealing the photo behind the mask by drawing with the finger? I have to do some similar work, and I would be thankful if you could provide some code showing how you did it.
    Thanks.
  • mdejong1024 Posts: 7 New Users
    edited July 2011
    I recently needed to figure out how to implement a solution like this. Here is the code I use:
    // This cropping method will render the imageToCrop into a new graphics context
    // cropped by the greyscale image defined by cropToImage. The result is returned
    // as a RGBA image. The result of this operation is an image where the greyscale
    // pixel value is converted into the alpha channel for the pixels.
    
    - (UIImage*)imageByCropping:(UIImage*)imageToCrop
                    cropToImage:(UIImage*)cropToImage
    {
      // create a context to do our clipping in
    
      CGImageRef imageToCropRef = imageToCrop.CGImage;
      CGImageRef cropToImageRef = cropToImage.CGImage;
      
      size_t width = CGImageGetWidth(imageToCropRef);
      size_t height = CGImageGetHeight(imageToCropRef);
      CGSize size = CGSizeMake(width, height);
      
      UIGraphicsBeginImageContext(size);
      CGContextRef currentContext = UIGraphicsGetCurrentContext();
      
      // Flip coordinate system
    
      CGContextTranslateCTM(currentContext, 0.0, size.height);
      CGContextScaleCTM(currentContext, 1.0, -1.0);
      
      // Create a new image that is the size of the original image.
        
      CGRect clippedRect = CGRectMake(0, 0, size.width, size.height);
      
      CGContextClipToMask( currentContext, clippedRect, cropToImageRef);
        
      CGRect drawRect = CGRectMake(0,
                                   0,
                                   imageToCrop.size.width,
                                   imageToCrop.size.height);
      
      // draw the image to our clipped context using our offset rect
      CGContextDrawImage(currentContext, drawRect, imageToCrop.CGImage);
      
      // pull the image from our cropped context
      UIImage *cropped = UIGraphicsGetImageFromCurrentImageContext();
      
      // pop the context to get back to the default
      UIGraphicsEndImageContext();
      
      // Note: this is autoreleased
      return cropped;
    }
    

    The trick is to generate a mask image that is a grayscale image, then the white parts are transparent and the black parts can't be seen. Anything in between becomes the alpha channel for the resulting image.
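
    A short usage sketch of the method above (photo.png and scratch_mask.png are placeholder file names, assumed to be a colour photo and a DeviceGray mask image in the app bundle; maskedImageView is assumed to be an existing UIImageView):

    UIImage *photo = [UIImage imageNamed:@"photo.png"];
    UIImage *mask  = [UIImage imageNamed:@"scratch_mask.png"];

    // Re-running this whenever the mask is redrawn from touch strokes
    // gives the scratch-away effect.
    maskedImageView.image = [self imageByCropping:photo cropToImage:mask];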
  • new2objectivec Posts: 44 Registered Users
    edited July 2011
    I quite like this idea. Just my 2c:

    Not sure if it's going to work, but how about overlaying another view on top which has clearColor as its background colour (and/or the "alpha" thing if required), filling that top layer with the green colour, and then using the touch action to erase that green background colour (replacing it with clear colour)?

    This way the image in the lower layer is left untouched, while the top layer is the one being updated.
    <a href="http://new2objectivec.blogspot.com/2011/10/4th-open-source-game-follow-me-if-you.html"; target="_blank">My 4th Open Source Game: Follow Me If You Can!!!<img src="http://www.iphonedevsdk.com/forum/images/smilies/smile.gif"; border="0" alt="" title="
  • new2objectivec Posts: 44 Registered Users
    edited July 2011
    I am still new to drawing but based on the code from the following 2 links:

    http://www.iphonedevsdk.com/forum/245649-post15.html
    [Tutorial] Drawing to the screen. - iFans - iPad, iPhone, and iPod touch Fans forums

    I created a test project which might achieve something "similar" to what the original question is asking. :p

    I have one bottom image view to show an image, then add another view on top to let the user draw with a finger.

    What I have achieved so far: the user can draw something with a finger on top, then click the "switch" button to switch to "erase" mode, erase the colour, and reveal the image at the bottom.

    The problem is that I don't know yet how to fill the whole view with a specific colour. If that can be done on the upper layer before loading it, the "switch" button can be removed and the app can go straight into "erase" mode, so the hidden image at the bottom is revealed by the user's finger movement.

    It shouldn't be too difficult to fill up the whole screen with a specific colour, right? Anyone, please?
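
    One way to pre-fill the overlay with a solid colour (a sketch, assuming drawImage is the overlay UIImageView used in the code below) is to render a filled rectangle into an image context once, before any touches:

    - (void)fillOverlayWithColor:(UIColor *)color {
        UIGraphicsBeginImageContext(drawImage.frame.size);
        CGContextRef ctx = UIGraphicsGetCurrentContext();
        CGContextSetFillColorWithColor(ctx, [color CGColor]);
        CGContextFillRect(ctx, CGRectMake(0, 0, drawImage.frame.size.width, drawImage.frame.size.height));
        drawImage.image = UIGraphicsGetImageFromCurrentImageContext();
        UIGraphicsEndImageContext();
    }

    Calling something like [self fillOverlayWithColor:[UIColor greenColor]]; at the end of loadView, and starting in erase mode, would give the scratch-away behaviour described above.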

    Code below, please let me know if any problem. Thanks!

    [UPDATED] Also posted on my blog: http://new2objectivec.blogspot.com/2011/07/image-mask-test-project.html

    Note: please replace the "watermelon.jpg" image name in the following code with any other image file you have. OR visit my blog for the full project code including the image.
    // based on code from http://www.iphonedevsdk.com/forum/245649-post15.html
    // and http://www.ifans.com/forums/showthread.php?t=132024
    
    
    #import <UIKit/UIKit.h>
    //------------------------------------------------------------------------------------
    //------------------------------------------------------------------------------------
    @interface TestBedViewController : UIViewController {
    	CGPoint lastPoint;
    	UIImageView *drawImage;
    	BOOL mouseSwiped;	
    	int mouseMoved;
    	BOOL drawMode;
    }
    @end
    
    @implementation TestBedViewController
    
    BOOL mouseSwiped=NO;
    
    -(void) createNormalButton: (UIButton *) buttonObj 
                        atPosX: (double) buttonPositionX
                        atPosY: (double) buttonPositionY 				  
                     withWidth: (double) buttonWidth  
                    withHeight: (double) buttonHeight 
                   withBGColor: (UIColor *) buttonBGColor 
                withTitleColor: (UIColor *) buttonTitleColor
                       withTag: (int) buttonTag 
                     withTitle: (NSString *) buttonTitle  
                  withFontSize: (int)buttonFontSize
                    withSelfID: (id)buttonSelfID 
                  withActionID: (SEL)selectorID 
                     ifEnabled: (BOOL)buttonEnabled 
                        inView: (UIView *)viewToAddTo
    
    {
    	buttonObj = [UIButton buttonWithType:UIButtonTypeRoundedRect];					
    	[buttonObj setFrame:CGRectMake(buttonPositionX, buttonPositionY, buttonWidth, buttonHeight)]; 
    	[buttonObj setTitle: buttonTitle forState:UIControlStateNormal];				
        [buttonObj.titleLabel setFont:[UIFont systemFontOfSize:buttonFontSize]];
    	[buttonObj setTag: buttonTag];    
        [buttonObj addTarget:buttonSelfID action:selectorID forControlEvents:UIControlEventTouchUpInside];
        if (buttonEnabled) {
            [buttonObj setEnabled:YES];
            
        } else {
            [buttonObj setEnabled:NO];
        }    
    	[viewToAddTo addSubview:buttonObj];													
    	[[buttonObj retain]autorelease];
    }
    
    - (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event {
        
        mouseSwiped = NO;
        UITouch *touch = [touches anyObject];
        
        if ([touch tapCount] == 2) {
            drawImage.image = nil;
            return;
        }
    	
        lastPoint = [touch locationInView:self.view];
        lastPoint.y -= 20;
    	
    }
    
    
    - (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event {
        mouseSwiped = YES;
        
        UITouch *touch = [touches anyObject];   
        CGPoint currentPoint = [touch locationInView:self.view];
        currentPoint.y -= 20;
        
    	
    	UIGraphicsBeginImageContext(self.view.frame.size);
    	
    	[drawImage.image drawInRect:CGRectMake(0, 0, self.view.frame.size.width, self.view.frame.size.height)];
    	
    	if (drawMode) {
    		CGContextSetBlendMode(UIGraphicsGetCurrentContext( ),kCGBlendModeNormal);
    	} else {
    	  CGContextSetBlendMode(UIGraphicsGetCurrentContext( ),kCGBlendModeClear);
    	}
    	CGContextSetLineCap(UIGraphicsGetCurrentContext(), kCGLineCapRound);
    	CGContextSetLineWidth(UIGraphicsGetCurrentContext( ), 25.0);
    	if (drawMode) {
    	   CGContextSetRGBStrokeColor(UIGraphicsGetCurrentContext(), 1.0, 0.0, 0.0, 1.0);
    	} else {
    	   CGContextSetStrokeColorWithColor(UIGraphicsGetCurrentContext(), [[UIColor clearColor] CGColor]);
    	}
    	CGContextBeginPath(UIGraphicsGetCurrentContext());
    	CGContextMoveToPoint(UIGraphicsGetCurrentContext(), lastPoint.x, lastPoint.y);
    	CGContextAddLineToPoint(UIGraphicsGetCurrentContext(), currentPoint.x, currentPoint.y);
    	CGContextStrokePath(UIGraphicsGetCurrentContext()) ;
    	
    	drawImage.image = UIGraphicsGetImageFromCurrentImageContext();
    	UIGraphicsEndImageContext();
    
        
        lastPoint = currentPoint;
    	
    }
    
    - (void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event {
        
        UITouch *touch = [touches anyObject];
        
        if ([touch tapCount] == 2) {
            drawImage.image = nil;
            return;
        }
        
        
        if(!mouseSwiped) {
            UIGraphicsBeginImageContext(self.view.frame.size);
            [drawImage.image drawInRect:CGRectMake(0, 0, self.view.frame.size.width, self.view.frame.size.height)];
            CGContextSetLineCap(UIGraphicsGetCurrentContext(), kCGLineCapRound);
            CGContextSetLineWidth(UIGraphicsGetCurrentContext(), 20.0);
            CGContextSetRGBStrokeColor(UIGraphicsGetCurrentContext(), 1.0, 1.0, 1.0, 0.5);
            CGContextMoveToPoint(UIGraphicsGetCurrentContext(), lastPoint.x, lastPoint.y);
            CGContextAddLineToPoint(UIGraphicsGetCurrentContext(), lastPoint.x, lastPoint.y);
            CGContextStrokePath(UIGraphicsGetCurrentContext());
            CGContextFlush(UIGraphicsGetCurrentContext());
            drawImage.image = UIGraphicsGetImageFromCurrentImageContext();
            UIGraphicsEndImageContext();
        }
    }
    
    - (void)loadView {  
    	
    	self.view=[[UIView alloc] initWithFrame:[[UIScreen mainScreen] applicationFrame]];
    	
    	self.view.backgroundColor=[UIColor grayColor];
    	
        UIImage *image = [UIImage imageNamed:@"watermelon.jpg"];
    	UIImageView *imageView = [[UIImageView alloc] initWithFrame:CGRectMake(0.0,0.0,300,300)];
    	imageView.image = image;
    	[self.view addSubview:imageView];
    	
    	[super viewDidLoad];
    	drawImage = [[UIImageView alloc] initWithImage:nil];
    	drawImage.frame = self.view.frame;
    	[self.view addSubview:drawImage];
    	mouseMoved = 0;
    	drawMode=YES;
    		
    	UIButton *buttonBeginning;
    	
    	[self createNormalButton: buttonBeginning 
    					  atPosX: 0
    					  atPosY: 0
    				   withWidth: 50
    				  withHeight: 30			 
    				 withBGColor: [UIColor whiteColor] 
    			  withTitleColor: [UIColor blackColor] 
    					 withTag: 12345
                       withTitle: @"switch"
    				withFontSize: 15
    				  withSelfID: self 
    				withActionID: @selector(buttonPressed:)
    				   ifEnabled: YES
    					  inView: self.view];
    }
    
    - (IBAction) buttonPressed: (id) sender
    {
    	drawMode = ! drawMode;   
    }	
    
    
    - (void)dealloc {
    
    	[super dealloc];
    }
    
    @end
    //------------------------------------------------------------------------------------
    //------------------------------------------------------------------------------------
    @interface TestBedAppDelegate : NSObject <UIApplicationDelegate> {
        UIWindow *window;
        TestBedViewController *viewController;
    }
    @property (nonatomic, retain) UIWindow *window;
    @property (nonatomic, retain) TestBedViewController *viewController;
    @end
    
    @implementation TestBedAppDelegate
    
    @synthesize window;
    @synthesize viewController;
    
    
    - (void)applicationDidFinishLaunching:(UIApplication *)application {	
        
        [application setStatusBarHidden:YES];
        
    	window = [[UIWindow alloc] initWithFrame:[[UIScreen mainScreen] bounds]];   	
        
        self.viewController = [[TestBedViewController alloc] init];
    	[self.window addSubview:self.viewController.view];
        [self.viewController release];     
        [self.window makeKeyAndVisible];
        
    }
    
    - (void)dealloc {
        [window release];
        [super dealloc];
    }
    @end
    //------------------------------------------------------------------------------------
    //------------------------------------------------------------------------------------
    int main(int argc, char *argv[])
    {
    	NSAutoreleasePool * pool = [[NSAutoreleasePool alloc] init];
        int retVal = UIApplicationMain(argc, argv, nil, @"TestBedAppDelegate");
    	[pool release];
    	return retVal;
    }
    
    <a href="http://new2objectivec.blogspot.com/2011/10/4th-open-source-game-follow-me-if-you.html"; target="_blank">My 4th Open Source Game: Follow Me If You Can!!!<img src="http://www.iphonedevsdk.com/forum/images/smilies/smile.gif"; border="0" alt="" title="
  • new2objectivec Posts: 44 Registered Users
    edited July 2011
    Have a look at this blog: Building My World | Problem. Solution. Code.

    The attached project (TestingScratchTicketEffekt.zip on 4shared.com) has the full working project code.

    When started, you first get the screen covered in a light orange colour; if you move your finger around, it reveals the hidden image behind it!
    <a href="http://new2objectivec.blogspot.com/2011/10/4th-open-source-game-follow-me-if-you.html"; target="_blank">My 4th Open Source Game: Follow Me If You Can!!!<img src="http://www.iphonedevsdk.com/forum/images/smilies/smile.gif"; border="0" alt="" title="
  • fernandoamorim Posts: 2 New Users
    edited January 2012
    flux wrote: »
    I would like to put a layer on top of a picture and then erase that layer through touches or accelerometer movement, 'scratching away' the place where a touch has occurred. [...]





    Hello everyone!

    I'm new here on the forum and have a similar problem to our friend flux's.
    I have a background image and need to create an image on top of it to be painted via touches on the screen.
    The problem is that the drawing is not rectangular (e.g. a drawing of a mouth), yet the painting covers the whole rectangular view.

    I would like to paint just the picture of the mouth (for example) and not the whole rectangle.

    Has anyone had a similar problem and managed to solve it?

    Hugs to all.
  • Suneha Posts: 39 New Users
    Hi Everyone,
    I'm working on a similar task, but the top image is a grayscale image and the bottom is the same image (the original). While swiping on the grayscale image, I should get the original image by erasing the grayscale one. I don't have any idea how to do this. Please help me.
  • dev666999 Posts: 3,512 New Users
    edited December 2014
    Suneha wrote: »
    Hi Everyone,
    I'm working on a similar task, but the top image is a grayscale image and the bottom is the same image (the original). While swiping on the grayscale image, I should get the original image by erasing the grayscale one. I don't have any idea how to do this. Please help me.

    Draw on your grayscale image using clearColor as your color. That will make the grayscale image transparent where you draw, and reveal the color image underneath.

    Use the following with drawRect...

    CGContextSetBlendMode(context, kCGBlendModeClear);
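
    For example, a touchesMoved: sketch along those lines (the names grayImageView and lastPoint are assumptions for illustration: the UIImageView showing the grayscale copy, sitting above the image view with the original, and an ivar holding the previous touch point updated in touchesBegan:):

    - (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event {
        CGPoint p = [[touches anyObject] locationInView:grayImageView];

        // NO -> the context keeps an alpha channel, so the cleared strokes
        // really become transparent instead of white.
        UIGraphicsBeginImageContextWithOptions(grayImageView.frame.size, NO, 0);
        CGContextRef ctx = UIGraphicsGetCurrentContext();
        [grayImageView.image drawInRect:CGRectMake(0, 0,
                                                   grayImageView.frame.size.width,
                                                   grayImageView.frame.size.height)];

        // kCGBlendModeClear punches transparency into the grayscale image,
        // revealing the colour image view underneath.
        CGContextSetBlendMode(ctx, kCGBlendModeClear);
        CGContextSetLineCap(ctx, kCGLineCapRound);
        CGContextSetLineWidth(ctx, 25.0);
        CGContextMoveToPoint(ctx, lastPoint.x, lastPoint.y);
        CGContextAddLineToPoint(ctx, p.x, p.y);
        CGContextStrokePath(ctx);

        grayImageView.image = UIGraphicsGetImageFromCurrentImageContext();
        UIGraphicsEndImageContext();
        lastPoint = p;
    }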
  • Suneha Posts: 39 New Users
    edited December 2014
    While I'm drawing I'm getting a white colour, and the grayscale image is not being erased. Please help me, guys...


    This is my code for converting the overlay image to grayscale, but what is the code to erase the grayscale image and reveal the original image underneath?

    CGRect imageRect = CGRectMake(0, 0, imageView2.image.size.width, imageView2.image.size.height);
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceGray();
    // Create a bitmap context with the current image size and a grayscale colorspace
    CGContextRef context = CGBitmapContextCreate(nil, imageView2.image.size.width, imageView2.image.size.height, 8, 0, colorSpace, kCGImageAlphaNone);
    CGContextDrawImage(context, imageRect, [imageView2.image CGImage]);
    CGImageRef imageRef = CGBitmapContextCreateImage(context);
    UIImage *newImage = [UIImage imageWithCGImage:imageRef];
    CGColorSpaceRelease(colorSpace);
    CGContextRelease(context);
    CFRelease(imageRef);
    imageView2.image=newImage;
  • dev666999 Posts: 3,512 New Users
    The answer is in my post above yours. Google...

    CGContextSetBlendMode(context, kCGBlendModeClear);

    And you'll find your answer, and code samples of what you are trying to do.