The perception range of automated vehicles is limited to the line-of-sight field of view of their on-board sensors (e.g., cameras, radars). Collaborative automated driving aims to extend the field of view of automated vehicles' sensors beyond their immediate proximity, thereby mitigating these perception limitations. Using this technology, vehicles extract information about objects in their surroundings and share it with other vehicles via DSRC vehicle-to-vehicle (V2V) communication. The shared information assists receiving vehicles in building an extended view of their surroundings. It should contain the minimal set of attributes that best describe the shared object, yet be descriptive enough to satisfy the requirements of both safety and non-safety applications. This set comprises positional, motion, and dimensional information. Accurate positional and dimensional information, however, is not easily extractable in all driving scenarios. This paper proposes a machine learning-based approach, integrated into the object tracking system, that classifies tracked objects and extracts their 3D information for sharing. The method provides the dimensions and the center-point location of each tracked object, as required for V2V communication. Results show that the system provides accurate positional and dimensional information.
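
As a rough illustration of the attribute set described above (positional, motion, and dimensional information plus an object class), a shared-object record might look like the sketch below. The field names, units, and class labels are illustrative assumptions only; they do not reflect the actual message format used in this work or any DSRC message standard such as SAE J2735.

```python
from dataclasses import dataclass


@dataclass
class SharedObject:
    """Illustrative attribute set for an object shared over DSRC V2V.

    All fields and units are assumptions for illustration, not the
    message layout defined in the paper or in SAE J2735.
    """
    # Positional information: center point of the tracked object
    latitude_deg: float
    longitude_deg: float
    elevation_m: float
    # Motion information
    speed_mps: float
    heading_deg: float
    # Dimensional information (3D bounding box of the object)
    length_m: float
    width_m: float
    height_m: float
    # Class label produced by the ML classifier (hypothetical labels)
    object_class: str


# Example: a tracked passenger car reported by the ego vehicle
obj = SharedObject(
    latitude_deg=42.3314, longitude_deg=-83.0458, elevation_m=190.0,
    speed_mps=13.4, heading_deg=87.0,
    length_m=4.6, width_m=1.8, height_m=1.5,
    object_class="car",
)
```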